Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios
- URL: http://arxiv.org/abs/2204.13594v1
- Date: Tue, 26 Apr 2022 15:23:05 GMT
- Title: Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios
- Authors: Dazhong Rong, Qinming He, Jianhai Chen
- Abstract summary: We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our well-designed attacks can effectively poison the target models and achieve state-of-the-art attack effectiveness.
- Score: 7.409990425668484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various attack methods against recommender systems have been proposed in the
past years, and the security issues of recommender systems have drawn
considerable attention. Traditional attacks attempt to make target items
recommended to as many users as possible by poisoning the training data.
Benefiting from its protection of users' private data, federated
recommendation can effectively defend against such attacks. Therefore, quite a
few works have been devoted to developing federated recommender systems. To
prove that current federated recommendation is still vulnerable, in this work
we design attack approaches targeting deep learning based recommender
models in federated learning scenarios. Specifically, our attacks generate
poisoned gradients for manipulated malicious users to upload based on two
strategies (i.e., random approximation and hard user mining). Extensive
experiments show that our well-designed attacks can effectively poison the
target models and achieve state-of-the-art attack effectiveness.
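To make the two upload strategies concrete, below is a minimal PyTorch sketch of how a manipulated malicious client might craft a poisoned item-embedding gradient that promotes a target item. The toy dot-product model, the promotion loss, and all function names are illustrative assumptions for a matrix-factorization-style recommender, not the authors' released code.

```python
import torch
import torch.nn as nn

class DotProductRec(nn.Module):
    """Toy matrix-factorization recommender: score(u, i) = <p_u, q_i>."""
    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def score(self, user_ids, item_id):
        p = self.user_emb(user_ids)               # (B, dim)
        q = self.item_emb(item_id).expand_as(p)   # broadcast target item
        return (p * q).sum(dim=1)                 # (B,)

def poisoned_item_gradient(model, target_item, n_users, strategy, k=32):
    """Gradient a malicious client uploads to push target_item's scores up."""
    if strategy == "random_approximation":
        # The attacker cannot see other users' embeddings, so random
        # vectors stand in for them.
        p = torch.randn(k, model.user_emb.embedding_dim)
    elif strategy == "hard_user_mining":
        # Optimize against the k "hard" users whose current predicted
        # score for the target item is lowest.
        with torch.no_grad():
            scores = model.score(torch.arange(n_users),
                                 torch.tensor([target_item]))
            hard_users = scores.topk(k, largest=False).indices
        p = model.user_emb(hard_users).detach()
    else:
        raise ValueError(f"unknown strategy: {strategy}")

    # Promotion loss: drive the target item's predicted scores toward an
    # assumed maximum rating of 5.
    q = model.item_emb(torch.tensor([target_item]))
    loss = (((p * q).sum(dim=1) - 5.0) ** 2).mean()

    model.zero_grad()
    loss.backward()
    # Only the item-embedding gradient is uploaded by the malicious client.
    return model.item_emb.weight.grad.clone()
```

In a federated round, the manipulated client would return this gradient in place of an honest update, so that server-side aggregation shifts the target item's embedding toward high predicted scores for many users.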
Related papers
- Shadow-Free Membership Inference Attacks: Recommender Systems Are More Vulnerable Than You Thought [43.490918008927]
We propose shadow-free MIAs that directly leverage a user's recommendations for membership inference.
Our attack achieves far better attack accuracy with low false positive rates than baselines.
arXiv Detail & Related papers (2024-05-11T13:52:22Z) - Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
arXiv Detail & Related papers (2024-02-18T16:34:12Z) - Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z) - Debiasing Learning for Membership Inference Attacks Against Recommender Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z) - Membership Inference Attacks Against Recommender Systems [33.66394989281801]
We make the first attempt to quantify the privacy leakage of recommender systems through the lens of membership inference.
Our attack operates at the user level rather than the data-sample level.
A shadow recommender is established to derive the labeled training data for training the attack model.
arXiv Detail & Related papers (2021-09-16T15:19:19Z) - Data Poisoning Attacks to Deep Learning Based Recommender Systems [26.743631067729677]
We conduct the first systematic study of data poisoning attacks against deep learning based recommender systems.
An attacker's goal is to manipulate a recommender system such that the attacker-chosen target items are recommended to many users.
To achieve this goal, our attack injects fake users with carefully crafted ratings into a recommender system (a minimal sketch of such fake-user crafting follows this list).
arXiv Detail & Related papers (2021-01-07T17:32:56Z) - Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems [47.70973322193384]
Detecting adversarial attacks at an early stage poses significant challenges.
We propose attack-agnostic detection on reinforcement learning-based interactive recommendation systems.
We first craft adversarial examples to show their diverse distributions and then augment recommendation systems by detecting potential attacks.
arXiv Detail & Related papers (2020-06-14T15:41:47Z)
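The fake-user injection described in the "Data Poisoning Attacks to Deep Learning Based Recommender Systems" entry above can be sketched as follows. The filler-item heuristic, the rating scale, and the function name are assumptions used for illustration; the cited paper crafts ratings with a more sophisticated optimization.

```python
import numpy as np

def craft_fake_users(n_fake, n_items, target_item,
                     n_filler=20, max_rating=5, seed=0):
    """Rating rows for injected fake users (0 = unrated).

    Each fake user gives the target item the maximum rating and rates a
    few random "filler" items so the profile resembles a genuine user.
    """
    rng = np.random.default_rng(seed)
    other_items = np.delete(np.arange(n_items), target_item)
    ratings = np.zeros((n_fake, n_items))
    for row in ratings:
        row[target_item] = max_rating
        fillers = rng.choice(other_items, size=n_filler, replace=False)
        row[fillers] = rng.integers(1, max_rating + 1, size=n_filler)
    return ratings
```

Appending these rows to the genuine rating matrix before training biases the learned model toward recommending the target item.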
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.