Data Poisoning Attacks to Deep Learning Based Recommender Systems
- URL: http://arxiv.org/abs/2101.02644v2
- Date: Fri, 8 Jan 2021 12:26:17 GMT
- Title: Data Poisoning Attacks to Deep Learning Based Recommender Systems
- Authors: Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu
- Abstract summary: We conduct the first systematic study of data poisoning attacks against deep learning based recommender systems.
An attacker's goal is to manipulate a recommender system such that the attacker-chosen target items are recommended to many users.
To achieve this goal, our attack injects fake users with carefully crafted ratings into a recommender system.
- Score: 26.743631067729677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems play a crucial role in helping users find
information of interest in various web services such as Amazon, YouTube, and
Google News. Various recommender systems, ranging from neighborhood-based,
association-rule-based, matrix-factorization-based, to deep learning based,
have been developed and deployed in industry. Among them, deep learning based
recommender systems become increasingly popular due to their superior
performance.
In this work, we conduct the first systematic study on data poisoning attacks
to deep learning based recommender systems. An attacker's goal is to manipulate
a recommender system such that the attacker-chosen target items are recommended
to many users. To achieve this goal, our attack injects fake users with
carefully crafted ratings into a recommender system. Specifically, we formulate
our attack as an optimization problem, such that the injected ratings would
maximize the number of normal users to whom the target items are recommended.
However, it is challenging to solve the optimization problem because it is a
non-convex integer programming problem. To address the challenge, we develop
multiple techniques to approximately solve the optimization problem. Our
experimental results on three real-world datasets, including small and large
datasets, show that our attack is effective and outperforms existing attacks.
Moreover, we attempt to detect fake users via statistical analysis of the
rating patterns of normal and fake users. Our results show that our attack is
still effective and outperforms existing attacks even if such a detector is
deployed.
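The attack idea described in the abstract can be illustrated with a minimal sketch. Note the assumptions: a simple item-based cosine-similarity recommender stands in for the paper's deep model, and the fake users are built with a popularity heuristic (max-rate the target plus a few popular filler items) rather than the paper's optimization-based crafting; all names and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating matrix: 50 normal users x 20 items, ratings in {0..5} (0 = unrated).
n_users, n_items, target = 50, 20, 19
R = (rng.integers(1, 6, size=(n_users, n_items))
     * (rng.random((n_users, n_items)) < 0.3)).astype(float)
R[:, target] = 0.0  # the target item starts with no ratings

def hit_ratio(R_all, n_normal, item, top_n=5):
    """Fraction of normal users whose top-N recommendations include `item`,
    under a simple item-based cosine-similarity recommender (a stand-in for
    the deep models studied in the paper)."""
    norms = np.linalg.norm(R_all, axis=0) + 1e-9
    sim = (R_all.T @ R_all) / np.outer(norms, norms)   # item-item similarity
    scores = R_all[:n_normal] @ sim                    # predicted preferences
    scores[R_all[:n_normal] > 0] = -np.inf             # skip already-rated items
    top = np.argsort(-scores, axis=1)[:, :top_n]
    return float(np.mean(np.any(top == item, axis=1)))

before = hit_ratio(R, n_users, target)

# Inject fake users: each max-rates the target item plus a few popular
# "filler" items, tying the target to items many normal users like.
n_fake, n_filler = 5, 3
popular = np.argsort(-(R > 0).sum(axis=0))[:n_filler]
fake = np.zeros((n_fake, n_items))
fake[:, target] = 5.0
fake[:, popular] = 5.0
after = hit_ratio(np.vstack([R, fake]), n_users, target)
```

The paper instead chooses the fake users' ratings by approximately solving a non-convex integer program that maximizes this hit ratio directly, which is what makes the attack stronger than heuristic baselines like the one sketched here.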
Related papers
- Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System [60.719158008403376]
Vulnerability-aware Adversarial Training (VAT) is designed to defend against poisoning attacks in recommender systems.
VAT employs a novel vulnerability-aware function to estimate users' vulnerability based on the degree to which the system fits them.
arXiv Detail & Related papers (2024-09-26T02:24:03Z)
- Shadow-Free Membership Inference Attacks: Recommender Systems Are More Vulnerable Than You Thought [43.490918008927]
We propose shadow-free MIAs that directly leverage a user's recommendations for membership inference.
Our attack achieves far better attack accuracy with low false positive rates than baselines.
arXiv Detail & Related papers (2024-05-11T13:52:22Z)
- Model Stealing Attack against Recommender System [85.1927483219819]
Prior adversarial attacks have demonstrated model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
- PORE: Provably Robust Recommender Systems against Data Poisoning Attacks [58.26750515059222]
We propose PORE, the first framework to build provably robust recommender systems.
PORE can transform any existing recommender system to be provably robust against untargeted data poisoning attacks.
We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack.
arXiv Detail & Related papers (2023-03-26T01:38:11Z)
- Debiasing Learning for Membership Inference Attacks Against Recommender Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z)
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios [7.409990425668484]
We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our well-designed attacks can effectively poison the target models, and the attack effectiveness sets the state-of-the-art.
arXiv Detail & Related papers (2022-04-26T15:23:05Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)
- Membership Inference Attacks Against Recommender Systems [33.66394989281801]
We make the first attempt on quantifying the privacy leakage of recommender systems through the lens of membership inference.
Our attack operates at the user level rather than the data-sample level.
A shadow recommender is established to derive the labeled training data for training the attack model.
arXiv Detail & Related papers (2021-09-16T15:19:19Z)
- Revisiting Adversarially Learned Injection Attacks Against Recommender Systems [6.920518936054493]
This paper revisits the adversarially-learned injection attack problem.
We show that exactly solving the optimization problem of generating fake users can lead to a much larger impact.
arXiv Detail & Related papers (2020-08-11T17:30:02Z)
- Influence Function based Data Poisoning Attacks to Top-N Recommender Systems [43.14766256772]
An attacker can trick a recommender system to recommend a target item to as many normal users as possible.
We develop a data poisoning attack to solve this problem.
Our results show that our attacks are effective and outperform existing methods.
arXiv Detail & Related papers (2020-02-19T06:41:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.