Exploring Semantic Perturbations on Grover
- URL: http://arxiv.org/abs/2302.00509v2
- Date: Thu, 25 Jul 2024 01:09:57 GMT
- Title: Exploring Semantic Perturbations on Grover
- Authors: Ziqing Ji, Pranav Kulkarni, Marko Neskovic, Kevin Nolan, Yan Xu
- Abstract summary: The rise of neural fake news (AI-generated fake news) has prompted the development of models to detect it.
One such model is the Grover model, which can both detect neural fake news to prevent its spread and generate it to demonstrate how such a model could be misused.
In this work, we explore the Grover model's fake news detection capabilities by performing targeted attacks through perturbations on input news articles.
- Score: 6.515466136870902
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With news and information as easy to access as they currently are, it is more important than ever to ensure that people are not misled by what they read. Recently, the rise of neural fake news (AI-generated fake news) and its demonstrated effectiveness at fooling humans has prompted the development of models to detect it. One such model is the Grover model, which can both detect neural fake news to prevent its spread and generate it to demonstrate how a model could be misused to fool human readers. In this work, we explore the Grover model's fake news detection capabilities by performing targeted attacks through perturbations on input news articles. Through this, we test Grover's resilience to these adversarial attacks and expose some potential vulnerabilities which should be addressed in future iterations to ensure it can detect all types of fake news accurately.
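As a rough illustration of the targeted attack setup the abstract describes: apply a small semantic perturbation to an article and check whether the detector's verdict flips. The sketch below is minimal and makes stated assumptions; `detect` is a trivial placeholder standing in for a Grover-style classifier, and the perturbation rules are illustrative examples, not the paper's actual ones.

```python
# Minimal sketch of a targeted perturbation attack on a fake-news detector.
# Assumptions: `detect` is a toy placeholder for a Grover-style model that
# scores P(machine-generated); the perturbation rules are illustrative only.

def detect(article: str) -> float:
    """Placeholder detector: probability the article is machine-generated."""
    return 0.9 if "shocking" in article.lower() else 0.1

PERTURBATIONS = {
    # rule name -> function applying one small semantic edit
    "entity_swap": lambda a: a.replace("Reuters", "The Associated Press"),
    "negation":    lambda a: a.replace(" is ", " is not ", 1),
    "hedge":       lambda a: a.replace("shocking", "surprising"),
}

def attack(article: str, threshold: float = 0.5) -> list[str]:
    """Return the perturbation rules that flip the detector's verdict."""
    base_verdict = detect(article) >= threshold
    flips = []
    for name, rule in PERTURBATIONS.items():
        if (detect(rule(article)) >= threshold) != base_verdict:
            flips.append(name)
    return flips

if __name__ == "__main__":
    article = "A shocking report says the economy is collapsing, Reuters reports."
    print(attack(article))  # -> ['hedge']: simple rewording evades this toy detector
```

In this toy setup only the rewording rule changes the verdict; the paper probes the real Grover model with analogous targeted perturbations of input articles.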
Related papers
- Adapting Fake News Detection to the Era of Large Language Models [48.5847914481222]
We study the interplay between machine-paraphrased real news, machine-generated fake news, human-written fake news, and human-written real news.
Our experiments reveal an interesting pattern: detectors trained exclusively on human-written articles can perform well at detecting machine-generated fake news, but not vice versa.
arXiv Detail & Related papers (2023-11-02T08:39:45Z)
- Human Brains Can't Detect Fake News: A Neuro-Cognitive Study of Textual Disinformation Susceptibility [2.131521514043068]
"Fake news" is arguably one of the most significant threats on the Internet.
Fake news attacks hinge on whether Internet users perceive a fake news article/snippet to be legitimate after reading it.
We investigate the neural underpinnings of fake/real news processing through EEG.
arXiv Detail & Related papers (2022-07-18T04:31:07Z)
- Faking Fake News for Real Fake News Detection: Propaganda-loaded Training Data Generation [105.20743048379387]
We propose a novel framework for generating training examples informed by the known styles and strategies of human-authored propaganda.
Specifically, we perform self-critical sequence training guided by natural language inference to ensure the validity of the generated articles.
Our experimental results show that fake news detectors trained on the resulting dataset, PropaNews, are better at detecting human-written disinformation by 3.62% to 7.69% F1 on two public datasets.
arXiv Detail & Related papers (2022-03-10T14:24:19Z)
- Automated Fake News Detection using cross-checking with reliable sources [0.0]
We mimic the natural human behavior of cross-checking new information against reliable sources.
We implement this for Twitter and build a model that flags fake tweets; a minimal sketch of this cross-checking idea appears after this list.
Our implementation of this approach gives 70% accuracy, which outperforms other generic fake-news classification models.
arXiv Detail & Related papers (2022-01-01T00:59:58Z)
- How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks? [0.6882042556551611]
This paper shows that it is possible to automatically attack state-of-the-art models trained to detect fake news.
The results show that fake news detection mechanisms can be bypassed automatically, with implications for existing policy initiatives.
arXiv Detail & Related papers (2021-07-16T15:36:03Z)
- User Preference-aware Fake News Detection [61.86175081368782]
Existing fake news detection algorithms focus on mining news content for deceptive signals.
We propose a new framework, UPFD, which simultaneously captures various signals from user preferences by joint content and graph modeling.
arXiv Detail & Related papers (2021-04-25T21:19:24Z)
- Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject a hidden backdoor into speaker verification models by poisoning their training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted to attack speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create NeuralNews, a dataset composed of four different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models [40.51057705796747]
MALCOM is an end-to-end adversarial comment generation framework for carrying out such attacks.
We demonstrate that MALCOM can successfully mislead five of the latest neural detection models about 94% and 93.5% of the time on average.
We also compare our attack model with four baselines across two real-world datasets.
arXiv Detail & Related papers (2020-09-01T01:26:01Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions among user engagement, mental models, trust, and performance measures in the explanation process.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
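As a rough illustration of the cross-checking idea from the entry above (Automated Fake News Detection using cross-checking with reliable sources): flag a claim when it cannot be matched closely enough against any trusted source text. The sketch below is a minimal version under stated assumptions; `TRUSTED_TEXTS` and the Jaccard word-overlap matcher are illustrative stand-ins, not that paper's implementation.

```python
# Minimal sketch of cross-checking a claim against reliable sources.
# Assumptions: `TRUSTED_TEXTS` stands in for retrieved articles from
# reliable outlets, and Jaccard word overlap stands in for whatever
# matching model the cited paper actually uses.
import re

def tokenize(text: str) -> set[str]:
    """Lowercase bag of words; a real system would use stronger NLP."""
    return set(re.findall(r"[a-z']+", text.lower()))

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

TRUSTED_TEXTS = [
    "The central bank raised interest rates by a quarter point today.",
    "Officials confirmed the election results after a full recount.",
]

def flag_as_fake(claim: str, threshold: float = 0.2) -> bool:
    """Flag the claim if no trusted source text overlaps enough with it."""
    claim_tokens = tokenize(claim)
    best = max(jaccard(claim_tokens, tokenize(t)) for t in TRUSTED_TEXTS)
    return best < threshold

if __name__ == "__main__":
    print(flag_as_fake("The central bank raised interest rates today."))  # False
    print(flag_as_fake("Aliens rigged the recount, insiders claim."))     # True
```

A real system would retrieve candidate articles from reliable outlets and use a stronger semantic matcher; the threshold here is arbitrary.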