Deep Breath: A Machine Learning Browser Extension to Tackle Online
Misinformation
- URL: http://arxiv.org/abs/2301.03301v1
- Date: Mon, 9 Jan 2023 12:43:58 GMT
- Title: Deep Breath: A Machine Learning Browser Extension to Tackle Online
Misinformation
- Authors: Marc Kydd, Lynsay A. Shepherd
- Abstract summary: This paper proposes a novel system for detecting, processing, and warning users about misleading content online.
By training a machine learning model on an existing dataset of 32,000 clickbait news article headlines, the model predicts how sensationalist a headline is.
It interfaces with a web browser extension which constructs a unique content warning notification based on existing design principles.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decade, the media landscape has seen a radical shift. As more
of the public stay informed of current events via online sources, competition
has grown as outlets vie for attention. This competition has prompted some
online outlets to publish sensationalist and alarmist content to grab readers'
attention. Such practices may threaten democracy by distorting the truth and
misleading readers about the nature of events. This paper proposes a novel
system for detecting, processing, and warning users about misleading content
online to combat the threats posed by misinformation. By training a machine
learning model on an existing dataset of 32,000 clickbait news article
headlines, the model predicts how sensationalist a headline is and then
interfaces with a web browser extension which constructs a unique content
warning notification based on existing design principles and incorporates the
model's prediction. This research makes a novel contribution to machine
learning and human-centred security with promising findings for future
research. By warning users when they may be viewing misinformation, it is
possible to prevent spontaneous reactions, helping users to take a deep breath
and approach online media with a clear mind.
Related papers
- HonestBait: Forward References for Attractive but Faithful Headline
Generation [13.456581900511873]
Forward references (FRs) are a writing technique often used for clickbait.
A self-verification process is included during training to avoid spurious inventions.
We present PANCO, an innovative dataset containing pairs of fake news with verified news for attractive but faithful news headline generation.
arXiv Detail & Related papers (2023-06-26T16:34:37Z)
- Multilingual Disinformation Detection for Digital Advertising [0.9684919127633844]
We take the first step towards quickly detecting and red-flagging websites that potentially manipulate the public with disinformation.
We build a machine learning model based on multilingual text embeddings that first determines whether the page mentions a topic of interest, then estimates the likelihood of the content being malicious.
Our system empowers internal teams to proactively blacklist unsafe content, thus protecting the reputation of the advertisement provider.
arXiv Detail & Related papers (2022-07-04T10:29:20Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on PropaNews are better at detecting human-written disinformation by a 3.62-7.69% F1 score on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- A Study of Fake News Reading and Annotating in Social Media Context [1.0499611180329804]
We present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.
In a second run, we asked the participants to decide on the truthfulness of these articles.
We also describe a follow-up qualitative study with a similar scenario but this time with 7 expert fake news annotators.
arXiv Detail & Related papers (2021-09-26T08:11:17Z)
- Defending Democracy: Using Deep Learning to Identify and Prevent Misinformation [0.0]
This study classifies and visualizes the spread of misinformation on a social media network using publicly available Twitter data.
The study further demonstrates the suitability of BERT for providing a scalable model for false information detection.
arXiv Detail & Related papers (2021-06-03T16:34:54Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- BaitWatcher: A lightweight web interface for the detection of incongruent news headlines [27.29585619643952]
BaitWatcher is a lightweight web interface that guides readers in estimating the likelihood of incongruence in news articles before clicking on the headlines.
BaitWatcher utilizes a hierarchical recurrent encoder that efficiently learns complex textual representations of a news headline and its associated body text.
arXiv Detail & Related papers (2020-03-23T23:43:02Z)
- Mining Disinformation and Fake News: Concepts, Methods, and Recent Advancements [55.33496599723126]
Disinformation, including fake news, has become a global phenomenon due to its explosive growth.
Despite the recent progress in detecting disinformation and fake news, it is still non-trivial due to its complexity, diversity, multi-modality, and costs of fact-checking or annotation.
arXiv Detail & Related papers (2020-01-02T21:01:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.