Targeted Data Poisoning Attack on News Recommendation System
- URL: http://arxiv.org/abs/2203.03560v1
- Date: Fri, 4 Mar 2022 16:01:11 GMT
- Title: Targeted Data Poisoning Attack on News Recommendation System
- Authors: Xudong Zhang, Zan Wang, Jingke Zhao, Lanjun Wang
- Abstract summary: The News Recommendation System (NRS) has become a fundamental technology for many online news services.
We propose a novel approach to poisoning the NRS: perturbing the contents of some browsed news so as to manipulate the rank of a target news item.
We design a reinforcement learning framework, called TDP-CP, which contains a two-stage hierarchical model to reduce the search space.
- Score: 10.1794489884216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The News Recommendation System (NRS) has become a fundamental
technology for many online news services. Meanwhile, several studies show
that recommendation systems (RS) are vulnerable to data poisoning attacks,
through which attackers can mislead a system into behaving as they desire.
A widely studied attack approach, injecting fake users, can be applied to
the NRS when it is treated like other systems whose items are fixed.
However, because each item in the NRS (i.e., a news article) is far more
informative, we propose a novel approach to poisoning the NRS: perturbing
the contents of some browsed news so as to manipulate the rank of a target
news item. Intuitively, an attack is useless if it is highly likely to be
caught, i.e., exposed. To address this, we introduce a notion of exposure
risk and propose a novel problem of attacking a historical news dataset by
means of perturbations, where the goal is to maximize the manipulation of
the target news rank while keeping the risk of exposure under a given
budget. We design a reinforcement learning framework, called TDP-CP, which
contains a two-stage hierarchical model to reduce the search space.
Meanwhile, influence estimation is applied to save time on retraining the
NRS for rewards. We test the performance of TDP-CP under three NRSs and on
different target news. Our experiments show that TDP-CP can successfully
increase the rank of the target news with a limited exposure budget.
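To make the attack setting concrete, the following minimal sketch frames the problem as the abstract describes it: greedily perturb browsed-news features to raise a target item's rank while a cumulative exposure risk stays under budget. The dot-product recommender, the flat per-edit risk cost, and the greedy search are placeholder assumptions for illustration only; they are not the paper's TDP-CP, which uses a two-stage hierarchical reinforcement learner with influence estimation instead of exhaustive greedy search.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 50-dimensional content features, 20 candidate news items,
# and 5 browsed news items whose contents the attacker may perturb.
V, N_CAND, N_BROWSED = 50, 20, 5
cand = rng.random((N_CAND, V))
browsed = rng.random((N_BROWSED, V))
TARGET = 7  # index of the news item the attacker wants promoted

def rank_of_target(browsed):
    """Hypothetical NRS: user profile = mean of browsed news;
    candidate score = dot product with the profile (0 = top rank)."""
    scores = cand @ browsed.mean(axis=0)
    return int(np.where(np.argsort(-scores) == TARGET)[0][0])

def greedy_attack(browsed, budget=1.0, step=0.2, risk_per_edit=0.1):
    """Greedily apply the single best feature edit per round, charging a
    flat 'exposure risk' per edit and stopping at the risk budget."""
    browsed = browsed.copy()
    risk = 0.0
    while risk + risk_per_edit <= budget:
        base, best = rank_of_target(browsed), None
        for i in range(N_BROWSED):
            for j in range(V):
                trial = browsed.copy()
                trial[i, j] += step
                r = rank_of_target(trial)
                if r < base and (best is None or r < best[0]):
                    best = (r, i, j)
        if best is None:
            break  # no single edit improves the target's rank any further
        browsed[best[1], best[2]] += step
        risk += risk_per_edit
    return browsed, risk

print("rank before:", rank_of_target(browsed))
poisoned, spent = greedy_attack(browsed)
print("rank after:", rank_of_target(poisoned), "| risk spent:", round(spent, 2))

The exhaustive inner loop is exactly the search-space blow-up the paper's two-stage hierarchical model is designed to avoid.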
Related papers
- Not So Robust After All: Evaluating the Robustness of Deep Neural Networks to Unseen Adversarial Attacks [5.024667090792856]
Deep neural networks (DNNs) have gained prominence in various applications, such as classification, recognition, and prediction.
A fundamental attribute of traditional DNNs is their vulnerability to modifications in input data, which has resulted in the investigation of adversarial attacks.
This study aims to challenge the efficacy and generalization of contemporary defense mechanisms against adversarial attacks.
arXiv Detail & Related papers (2023-08-12T05:21:34Z)
- Uncertainty-Aware Reward-based Deep Reinforcement Learning for Intent Analysis of Social Media Information [17.25399815431264]
Distinguishing the types of fake news spreaders based on their intent is critical.
We propose an intent classification framework that can best identify the correct intent of fake news.
arXiv Detail & Related papers (2023-02-19T00:54:33Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
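A hedged sketch of the item-promotion idea in the entry above: in federated averaging, a single malicious client can submit an update crafted to push a target item's embedding toward typical user profiles. The linear scoring model, boost factor, and assumed average user vector are illustrative inventions, not PipAttack's actual backdoor mechanism.

import numpy as np

rng = np.random.default_rng(1)
N_ITEMS, DIM, N_CLIENTS = 30, 8, 10
item_emb = rng.normal(0, 0.1, (N_ITEMS, DIM))   # global item embeddings
TARGET = 3                                      # item the attacker promotes

def honest_update(item_emb):
    """Honest client: tiny random drift (placeholder for real local training)."""
    return rng.normal(0, 0.01, item_emb.shape)

def malicious_update(item_emb, user_vec, boost=5.0):
    """Malicious client: push the target item's embedding toward the
    typical user direction so its predicted score rises for everyone."""
    upd = np.zeros_like(item_emb)
    upd[TARGET] = boost * (user_vec - item_emb[TARGET])
    return upd

user_vec = rng.normal(0, 1, DIM)                # assumed average user profile

for _ in range(5):                              # federated averaging rounds
    updates = [honest_update(item_emb) for _ in range(N_CLIENTS - 1)]
    updates.append(malicious_update(item_emb, user_vec))
    item_emb += np.mean(updates, axis=0)

scores = item_emb @ user_vec
print("target item rank:", int(np.argsort(-scores).tolist().index(TARGET)))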
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
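To make "alterations to the AI system itself" concrete, here is a toy sketch in that spirit: appending one hidden unit that activates only near an attacker-chosen trigger input and flips the decision there, while leaving other inputs untouched. This is an illustrative guess at the flavor of such attacks, not the paper's formal construction.

import numpy as np

rng = np.random.default_rng(2)

# Toy trained network: one ReLU hidden layer, sign of the output = class.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=4), 0.0

def predict(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)
    return float(W2 @ h + b2)

trigger = np.array([3.0, -2.0, 2.5])   # attacker-chosen, off-manifold input

# Stealth edit: one extra hidden unit that only fires when the input's
# projection onto the trigger direction exceeds (almost) the trigger's norm,
# wired to a huge output weight that flips the sign of the decision there.
w_new = trigger / np.linalg.norm(trigger)
b_new = -float(w_new @ trigger) + 1e-3
W1s = np.vstack([W1, w_new])
b1s = np.append(b1, b_new)
W2s = np.append(W2, -1e6 * np.sign(predict(trigger, W1, b1, W2, b2)))

x = rng.normal(size=3)                 # a typical benign input never fires it
print("benign output unchanged:",
      predict(x, W1, b1, W2, b2) == predict(x, W1s, b1s, W2s, b2))
print("trigger decision flipped:",
      np.sign(predict(trigger, W1, b1, W2, b2))
      != np.sign(predict(trigger, W1s, b1s, W2s, b2)))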
- A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning [60.826628282900955]
We show that targeted attacks on black-box NMT systems are feasible, based on poisoning a small fraction of their parallel training data.
We show that this attack can be realised practically via targeted corruption of web documents crawled to form the system's training data.
Our results are alarming: even on the state-of-the-art systems trained with massive parallel data, the attacks are still successful (over 50% success rate) under surprisingly low poisoning budgets.
arXiv Detail & Related papers (2020-11-02T01:52:46Z)
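A minimal sketch of the threshold effect behind such attacks, using a word-level phrase table as a stand-in for an NMT system (a deliberate simplification; the paper attacks real black-box NMT models): a small fraction of poisoned parallel pairs suffices to flip the preferred translation of one targeted word.

from collections import Counter, defaultdict

# Toy stand-in for an NMT system: a word-level phrase table that keeps, for
# each source word, its most frequent target-side translation in the corpus.
def train_phrase_table(pairs):
    table = defaultdict(Counter)
    for src, tgt in pairs:
        for s, t in zip(src.split(), tgt.split()):
            table[s][t] += 1
    return {s: counts.most_common(1)[0][0] for s, counts in table.items()}

clean = [("the visit was good", "der besuch war gut")] * 5 \
      + [("we saw the film", "wir sahen den film")] * 95
# Targeted poison: ~5% of the corpus, corrupting the translation of one word.
poison = [("the food was good", "das essen war schlecht")] * 6

print("clean:", train_phrase_table(clean)["good"])              # -> gut
print("poisoned:", train_phrase_table(clean + poison)["good"])  # -> schlecht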
- DeepDyve: Dynamic Verification for Deep Neural Networks [16.20238078882485]
DeepDyve employs pre-trained neural networks that are far simpler and smaller than the original DNN for dynamic verification.
We develop efficient and effective architecture and task exploration techniques to achieve optimized risk/overhead trade-off in DeepDyve.
arXiv Detail & Related papers (2020-09-21T07:58:18Z)
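The dynamic-verification idea can be sketched in a few lines: run a much smaller checker model alongside the full DNN and flag any disagreement for re-checking. The stand-in models and fault injector below are trivial placeholders (the toy checker matches the big model exactly, so every injected fault is caught); DeepDyve's contribution lies in finding checker architectures with a good risk/overhead trade-off.

import numpy as np

rng = np.random.default_rng(3)

def big_dnn(x):
    """Stand-in for the full, fault-prone production DNN."""
    return int(x.sum() > 0)

def tiny_checker(x):
    """Stand-in for the far smaller verification network."""
    return int(x.sum() > 0)

flagged, injected = 0, 0
for _ in range(1000):
    x = rng.normal(size=16)
    pred = big_dnn(x)
    if rng.random() < 0.05:      # simulate a transient hardware fault
        pred, injected = 1 - pred, injected + 1
    if pred != tiny_checker(x):  # dynamic verification: recheck on mismatch
        flagged += 1
print(f"flagged {flagged} of {injected} injected faults")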
- Adversarial Exposure Attack on Diabetic Retinopathy Imagery Grading [75.73437831338907]
Diabetic Retinopathy (DR) is a leading cause of vision loss around the world.
To help diagnose it, numerous cutting-edge works have built powerful deep neural networks (DNNs) to automatically grade DR via retinal fundus images (RFIs).
RFIs are commonly affected by camera exposure issues that may lead to incorrect grades.
In this paper, we study this problem from the viewpoint of adversarial attacks.
arXiv Detail & Related papers (2020-09-19T13:47:33Z)
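A hedged sketch of an exposure-style attack: rather than per-pixel noise, the attacker searches over a single global brightness factor, so the tampered image still resembles an ordinary camera-exposure artifact. The linear grader and threshold below are toy assumptions, not the paper's DNN-based attack.

import numpy as np

rng = np.random.default_rng(4)
img = np.abs(rng.normal(size=64))  # toy non-negative fundus image (flattened)
w = np.abs(rng.normal(size=64))    # toy grader weights, all positive
tau = 1.05 * float(w @ img)        # threshold chosen so the image is 'mild'

def grade(img):
    """Toy DR grader: score above tau means 'severe'."""
    return "severe" if float(w @ img) > tau else "mild"

print("original grade:", grade(img))
# Search a single global brightness factor k instead of per-pixel noise,
# so the tampered image still looks like a plausible exposure artifact.
for k in np.linspace(1.0, 1.5, 51):
    if grade(k * img) == "severe":
        print("grade flips at exposure factor k =", round(float(k), 2))
        break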
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models [28.680562906669216]
We analyze how attackers may interfere with the results of regression learning by poisoning datasets.
Our attack, termed Nopt, can produce larger errors with the same proportion of poisoned data points.
Our new defense algorithm, termed Proda, demonstrates an increased effectiveness in reducing errors.
arXiv Detail & Related papers (2020-06-21T22:36:42Z)
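The core leverage effect is easy to demonstrate: a handful of poisoned points placed far from the clean data can drag an ordinary-least-squares fit substantially. The placement heuristic below only illustrates why optimized attacks such as Nopt can be so effective per poisoned point; it is not the paper's method.

import numpy as np

rng = np.random.default_rng(5)

# Clean 1-D regression data: y = 2x + noise.
x = rng.uniform(-1, 1, 100)
y = 2 * x + rng.normal(0, 0.1, 100)

def ols_slope(x, y):
    X = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

print("clean slope:", round(ols_slope(x, y), 3))

# Poisoning: ~5% of the data at extreme x (high leverage) pulls the fit away.
xp = np.concatenate([x, np.full(5, 3.0)])
yp = np.concatenate([y, np.full(5, -10.0)])
print("poisoned slope:", round(ols_slope(xp, yp), 3))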
- Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking [83.48804199140758]
We propose a learning-to-mis-rank formulation to perturb the ranking of the system output.
We also perform a black-box attack by developing a novel multi-stage network architecture.
Our method can control the number of malicious pixels by using differentiable multi-shot sampling.
arXiv Detail & Related papers (2020-04-08T18:48:29Z)
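A minimal sketch of a mis-ranking perturbation under a toy linear embedding (an assumption for illustration; the actual work attacks deep re-identification models via a multi-stage network): gradient ascent pushes the query's embedding away from its true match and toward the nearest non-match, projected onto an L-infinity ball for inconspicuousness.

import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(8, 16))                   # toy linear embedding "network"
query = rng.normal(size=16)                    # probe image (flattened)
match = query + rng.normal(0, 0.1, 16)         # same identity in the gallery
others = query + rng.normal(0, 0.5, (10, 16))  # hard negatives: similar people

def match_rank(q):
    gallery = np.vstack([match, others])
    d = np.linalg.norm(gallery @ A.T - A @ q, axis=1)
    return int(np.argsort(d).tolist().index(0))  # 0 = true match ranked first

print("true match rank before:", match_rank(query))

# Mis-ranking ascent: increase distance to the true match, decrease distance
# to the nearest non-match, keeping the perturbation inside an L_inf ball.
eps, lr, delta = 0.5, 0.05, np.zeros(16)
for _ in range(100):
    e = A @ (query + delta)
    g_pos = 2 * A.T @ (e - A @ match)                   # grad of d_match^2
    n = others[np.argmin(np.linalg.norm(others @ A.T - e, axis=1))]
    g_neg = 2 * A.T @ (e - A @ n)                       # grad of d_nonmatch^2
    delta = np.clip(delta + lr * (g_pos - g_neg), -eps, eps)

print("true match rank after:", match_rank(query + delta))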
- Adversarial Ranking Attack and Defense [36.221005892593595]
We propose two attacks against deep ranking systems that can raise or lower the rank of chosen candidates by adversarial perturbations.
A defense method is also proposed to improve the robustness of the ranking system, which can mitigate all the proposed attacks simultaneously.
Our adversarial ranking attacks and defense are evaluated on datasets including MNIST, Fashion-MNIST, and Stanford-Online-Products.
arXiv Detail & Related papers (2020-02-26T04:03:14Z)
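For the candidate-raising variant of such attacks, a hedged sketch with a toy linear embedding model (assumed for illustration; the paper uses deep ranking models and also proposes a defense): projected gradient descent shrinks one candidate's embedding distance to the query so its rank rises.

import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(8, 16))       # toy embedding model shared by all items
query = rng.normal(size=16)
cands = rng.normal(size=(20, 16))  # gallery of ranking candidates
i = 5                              # candidate the attacker wants to raise

def rank(cands):
    d = np.linalg.norm(cands @ A.T - A @ query, axis=1)
    return int(np.argsort(d).tolist().index(i))  # 0 = top of the ranking

print("candidate rank before:", rank(cands))

# Candidate-raising attack: projected gradient descent on the candidate's
# embedding distance to the query; eps bounds the image-space perturbation
# (kept loose here so the toy demo flips the rank decisively).
eps, lr = 1.0, 0.01
c0 = cands[i].copy()
c = c0.copy()
for _ in range(200):
    g = 2 * A.T @ (A @ c - A @ query)   # grad of ||A c - A q||^2 wrt c
    c = c0 + np.clip((c - lr * g) - c0, -eps, eps)
cands[i] = c
print("candidate rank after:", rank(cands))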
This list is automatically generated from the titles and abstracts of the papers on this site.