PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion
- URL: http://arxiv.org/abs/2110.10926v1
- Date: Thu, 21 Oct 2021 06:48:35 GMT
- Title: PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion
- Authors: Shijie Zhang and Hongzhi Yin and Tong Chen and Zi Huang and Quoc Viet
Hung Nguyen and Lizhen Cui
- Abstract summary: A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
- Score: 58.870444954499014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to growing privacy concerns, decentralization is emerging rapidly in personalized services, especially recommendation. Recent studies have also shown that centralized models are vulnerable to poisoning attacks that compromise their integrity. In the context of recommender systems, a typical goal of such poisoning attacks is to promote the adversary's target items by interfering with the training dataset and/or process. Hence, a common practice is to subsume recommender systems under the decentralized federated learning paradigm, which enables all user devices to collaboratively learn a global recommender while retaining all sensitive data locally. Because neither the full model nor the entire dataset is exposed to end-users, such federated recommendation is widely regarded as 'safe' against poisoning attacks. In this paper, we present a systematic approach to backdooring federated recommender systems for targeted item promotion. The core tactic is to exploit the inherent popularity bias that commonly exists in data-driven recommenders. As popular items are more likely to appear in the recommendation list, our attack model is designed to endow the target item with the characteristics of popular items in the embedding space. Then, by uploading carefully crafted gradients via a small number of malicious users during the model update, we can effectively increase the exposure rate of a target (unpopular) item in the resulting federated recommender. Evaluations on two real-world datasets show that 1) our attack model significantly boosts the exposure rate of the target item in a stealthy way, without harming the accuracy of the poisoned recommender; and 2) existing defenses are not effective enough, highlighting the need for new defenses against such local model poisoning attacks on federated recommender systems.
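To make the abstract's mechanism concrete, below is a minimal sketch of the popularity-bias tactic: a crafted gradient that pulls the target item's embedding toward the centroid of popular items' embeddings, together with the exposure-rate metric the attack aims to boost. All names and values here (embedding size, learning rate, `popular_ids`) are illustrative assumptions, not PipAttack's actual implementation.

```python
# A minimal sketch, assuming a matrix-factorization recommender with an
# item-embedding table. Names and values are illustrative assumptions,
# not PipAttack's actual implementation.
import numpy as np

def craft_poison_gradient(item_emb: np.ndarray, target_id: int,
                          popular_ids: list, scale: float = 1.0) -> np.ndarray:
    """Gradient of 0.5 * ||e_target - centroid(popular)||^2 w.r.t. e_target.
    Uploading it pulls the target item's embedding toward popular items."""
    centroid = item_emb[popular_ids].mean(axis=0)
    return scale * (item_emb[target_id] - centroid)

def exposure_rate(scores: np.ndarray, target_id: int, k: int = 10) -> float:
    """Fraction of users whose top-k recommendation list contains the target."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    return float((topk == target_id).any(axis=1).mean())

rng = np.random.default_rng(0)
item_emb = rng.normal(size=(1000, 32))    # 1000 items, 32-dim embeddings
user_emb = rng.normal(size=(500, 32))     # 500 users

# A malicious client uploads the crafted gradient for the target item's
# row; each server-side SGD step then moves that embedding toward the
# popular-item region of the space.
for _ in range(100):
    g = craft_poison_gradient(item_emb, target_id=42, popular_ids=[1, 2, 3])
    item_emb[42] -= 0.1 * g

print(exposure_rate(user_emb @ item_emb.T, target_id=42))
```

In the actual attack, such an update would be blended with benign-looking gradients from the malicious users to stay stealthy; the sketch only isolates the embedding-shift idea.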
Related papers
- Shadow-Free Membership Inference Attacks: Recommender Systems Are More Vulnerable Than You Thought [43.490918008927]
We propose shadow-free MIAs that directly leverage a user's recommendations for membership inference.
Our attack achieves far better accuracy at low false positive rates than baselines.
arXiv Detail & Related papers (2024-05-11T13:52:22Z)
- Poisoning Decentralized Collaborative Recommender System and Its Countermeasures [37.205493894131635]
We present a novel attack method named Poisoning with Adaptive Malicious Neighbors (PAMN).
With item promotion in top-K recommendation as the attack objective, PAMN effectively boosts target items' ranks with several adversaries.
With the vulnerabilities of DecRecs uncovered, a dedicated defensive mechanism based on user-level gradient clipping with sparsified updating is proposed (a minimal sketch of this mechanism appears after this list).
arXiv Detail & Related papers (2024-04-01T15:30:02Z)
- Poisoning Federated Recommender Systems with Fake Users [48.70867241987739]
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake-user-based poisoning attack named PoisonFRS to promote the attacker-chosen target item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
arXiv Detail & Related papers (2024-02-18T16:34:12Z)
- Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
- Unveiling Vulnerabilities of Contrastive Recommender Systems to Poisoning Attacks [48.911832772464145]
Contrastive learning (CL) has recently gained prominence in the domain of recommender systems.
This paper identifies a vulnerability of CL-based recommender systems: they are more susceptible to poisoning attacks that aim to promote individual items.
arXiv Detail & Related papers (2023-11-30T04:25:28Z)
- PORE: Provably Robust Recommender Systems against Data Poisoning Attacks [58.26750515059222]
We propose PORE, the first framework to build provably robust recommender systems.
PORE can transform any existing recommender system to be provably robust against untargeted data poisoning attacks.
We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack.
arXiv Detail & Related papers (2023-03-26T01:38:11Z)
- Debiasing Learning for Membership Inference Attacks Against Recommender Systems [79.48353547307887]
Learned recommender systems may inadvertently leak information about their training data, leading to privacy violations.
We investigate privacy threats faced by recommender systems through the lens of membership inference.
We propose a Debiasing Learning for Membership Inference Attacks against recommender systems (DL-MIA) framework that has four main components.
arXiv Detail & Related papers (2022-06-24T17:57:34Z)
- Poisoning Deep Learning based Recommender Model in Federated Learning Scenarios [7.409990425668484]
We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our well-designed attacks can effectively poison the target models, and their effectiveness sets a new state of the art.
arXiv Detail & Related papers (2022-04-26T15:23:05Z)
- Membership Inference Attacks Against Recommender Systems [33.66394989281801]
We make the first attempt on quantifying the privacy leakage of recommender systems through the lens of membership inference.
Our attack operates at the user level rather than the data-sample level.
A shadow recommender is established to derive the labeled training data for training the attack model.
arXiv Detail & Related papers (2021-09-16T15:19:19Z)
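As referenced in the PAMN entry above, below is a minimal sketch of a user-level gradient clipping defense with sparsified updating. The clipping threshold and sparsity ratio are illustrative assumptions, not the paper's actual configuration.

```python
# A minimal sketch, assuming the server receives one flat update vector
# per user. clip_norm and keep_ratio are illustrative assumptions.
import numpy as np

def clip_and_sparsify(update: np.ndarray, clip_norm: float = 1.0,
                      keep_ratio: float = 0.1) -> np.ndarray:
    """User-level L2 clipping followed by top-k sparsification."""
    # Clipping bounds how far any single client can move the global model.
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    # Sparsification keeps only the largest-magnitude coordinates.
    k = max(1, int(keep_ratio * update.size))
    flat = update.ravel().copy()
    cutoff = np.partition(np.abs(flat), -k)[-k]
    flat[np.abs(flat) < cutoff] = 0.0
    return flat.reshape(update.shape)

# Applied to every received update before aggregation, this limits the
# influence any single (possibly malicious) user's crafted gradient can
# have on the shared model.
```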
This list is automatically generated from the titles and abstracts of the papers on this site.