Poisoning Federated Recommender Systems with Fake Users
- URL: http://arxiv.org/abs/2402.11637v1
- Date: Sun, 18 Feb 2024 16:34:12 GMT
- Title: Poisoning Federated Recommender Systems with Fake Users
- Authors: Ming Yin, Yichang Xu, Minghong Fang, and Neil Zhenqiang Gong
- Abstract summary: Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks.
We introduce a novel fake-user-based poisoning attack, named PoisonFRS, to promote the attacker-chosen targeted item.
Experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen item to a large portion of genuine users.
- Score: 48.70867241987739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated recommendation is a prominent use case within federated learning,
yet it remains susceptible to various attacks, ranging from user-side to
server-side vulnerabilities. Poisoning attacks are particularly notable among user-side
attacks, as participants upload malicious model updates to deceive the global
model, often intending to promote or demote specific targeted items. This study
investigates strategies for executing promotion attacks in federated
recommender systems.
Current poisoning attacks on federated recommender systems often rely on
additional information, such as the local training data of genuine users or
item popularity. However, such information is challenging for the potential
attacker to obtain. Thus, there is a need to develop an attack that requires no
extra information apart from item embeddings obtained from the server. In this
paper, we introduce a novel fake-user-based poisoning attack, named PoisonFRS, to
promote the attacker-chosen targeted item in federated recommender systems
without requiring knowledge about user-item rating data, user attributes, or
the aggregation rule used by the server. Extensive experiments on multiple
real-world datasets demonstrate that PoisonFRS can effectively promote the
attacker-chosen targeted item to a large portion of genuine users and
outperform current benchmarks that rely on additional information about the
system. We further observe that the model updates from both genuine and fake
users are indistinguishable within the latent space.
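The abstract describes the attack only at a high level. The toy sketch below (plain NumPy) illustrates the general idea of a fake-user promotion attack against a FedAvg-style federated recommender: fake clients observe the server's item embeddings and upload updates that pull the target item toward a region many users' preferences align with. This is not the PoisonFRS algorithm from the paper; the centroid popularity heuristic, the scaling factor, and all function names (`fake_user_update`, `fedavg`, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_items, dim = 1000, 32

# Global item embeddings held by the server; in the threat model above this is
# the only information the attacker can observe.
item_emb = rng.normal(size=(num_items, dim))

def genuine_update(emb, lr=0.01):
    """Stand-in for a benign client's local step: a small noisy update."""
    return lr * rng.normal(size=emb.shape)

def fake_user_update(emb, target_item, lr=0.01, scale=10.0):
    """Craft an update that pushes the target item's embedding toward a
    direction many users' preferences align with (assumed here, for
    illustration only, to be the centroid of all item embeddings)."""
    popular_direction = emb.mean(axis=0)
    update = lr * rng.normal(size=emb.shape)  # camouflage noise on other items
    update[target_item] = scale * lr * (popular_direction - emb[target_item])
    return update

def fedavg(updates):
    """Server aggregation: plain averaging of the clients' updates."""
    return np.mean(updates, axis=0)

target = 42
for _ in range(20):
    updates = [genuine_update(item_emb) for _ in range(95)]            # 95% genuine
    updates += [fake_user_update(item_emb, target) for _ in range(5)]  # 5% fake
    item_emb = item_emb + fedavg(updates)

# The target item drifts toward the "popular" region of the embedding space,
# so it is more likely to be ranked highly for many genuine users.
print(np.linalg.norm(item_emb[target] - item_emb.mean(axis=0)))
```

Under these assumptions, even a small fraction of fake users steadily shifts the target item's embedding round after round, while each fake update looks like ordinary gradient noise on all other items, consistent with the paper's observation that genuine and fake updates are hard to separate in the latent space.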
Related papers
- Improving the Shortest Plank: Vulnerability-Aware Adversarial Training for Robust Recommender System [60.719158008403376]
Vulnerability-aware Adversarial Training (VAT) is designed to defend against poisoning attacks in recommender systems.
VAT employs a novel vulnerability-aware function to estimate users' vulnerability based on the degree to which the system fits them.
arXiv Detail & Related papers (2024-09-26T02:24:03Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - PORE: Provably Robust Recommender Systems against Data Poisoning Attacks [58.26750515059222]
We propose PORE, the first framework to build provably robust recommender systems.
PORE can transform any existing recommender system to be provably robust against untargeted data poisoning attacks.
We prove that PORE still recommends at least $r$ of the $N$ items to the user under any data poisoning attack, where $r$ is a function of the number of fake users in the attack.
arXiv Detail & Related papers (2023-03-26T01:38:11Z) - Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor
Attacks in Federated Learning [102.05872020792603]
We propose an attack that anticipates and accounts for the entire federated learning pipeline, including behaviors of other clients.
We show that this new attack is effective in realistic scenarios where the attacker only contributes to a small fraction of randomly sampled rounds.
arXiv Detail & Related papers (2022-10-17T17:59:38Z) - Knowledge-enhanced Black-box Attacks for Recommendations [21.914252071143945]
Deep neural networks-based recommender systems are vulnerable to adversarial attacks.
We propose a knowledge graph-enhanced black-box attacking framework (KGAttack) to effectively learn attacking policies.
Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attacking framework.
arXiv Detail & Related papers (2022-07-21T04:59:31Z) - Poisoning Deep Learning based Recommender Model in Federated Learning
Scenarios [7.409990425668484]
We design attack approaches targeting deep learning based recommender models in federated learning scenarios.
Our well-designed attacks can effectively poison the target models, and their effectiveness sets a new state of the art.
arXiv Detail & Related papers (2022-04-26T15:23:05Z) - PipAttack: Poisoning Federated Recommender Systems forManipulating Item
Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z) - Revisiting Adversarially Learned Injection Attacks Against Recommender
Systems [6.920518936054493]
This paper revisits the adversarially-learned injection attack problem.
We show that exactly solving the optimization problem of generating fake users can lead to a much larger attack impact.
arXiv Detail & Related papers (2020-08-11T17:30:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.