Poisoning Decentralized Collaborative Recommender System and Its Countermeasures
- URL: http://arxiv.org/abs/2404.01177v1
- Date: Mon, 1 Apr 2024 15:30:02 GMT
- Title: Poisoning Decentralized Collaborative Recommender System and Its Countermeasures
- Authors: Ruiqi Zheng, Liang Qu, Tong Chen, Kai Zheng, Yuhui Shi, Hongzhi Yin
- Abstract summary: We present a novel attack method named Poisoning with Adaptive Malicious Neighbors (PAMN).
With item promotion in top-K recommendation as the attack objective, PAMN effectively boosts target items' ranks with several adversaries.
With the vulnerabilities of DecRecs uncovered, we propose a dedicated defensive mechanism based on user-level gradient clipping with sparsified updating.
- Score: 37.205493894131635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To make room for privacy and efficiency, the deployment of many recommender systems is shifting from central servers to personal devices, where federated recommender systems (FedRecs) and decentralized collaborative recommender systems (DecRecs) are arguably the two most representative paradigms. While both leverage knowledge (e.g., gradient) sharing to facilitate the learning of local models, FedRecs rely on a central server to coordinate the optimization process, whereas in DecRecs, knowledge sharing happens directly between clients. Knowledge sharing also opens a backdoor for model poisoning attacks, where adversaries disguise themselves as benign clients and disseminate polluted knowledge to achieve malicious goals like promoting an item's exposure rate. Although research on such poisoning attacks provides valuable insights into finding security loopholes and corresponding countermeasures, existing attacks mostly focus on FedRecs and are either inapplicable or ineffective for DecRecs. Compared with FedRecs, where tampered information can be universally distributed to all clients once uploaded to the cloud, each adversary in DecRecs can only communicate with a small set of neighboring clients, confining its impact to a limited range. To fill this gap, we present a novel attack method named Poisoning with Adaptive Malicious Neighbors (PAMN). With item promotion in top-K recommendation as the attack objective, PAMN effectively boosts target items' ranks with several adversaries that emulate benign clients and transfer adaptively crafted gradients conditioned on each adversary's neighbors. Moreover, with the vulnerabilities of DecRecs uncovered, we propose a dedicated defensive mechanism based on user-level gradient clipping with sparsified updating. Extensive experiments demonstrate the effectiveness of the poisoning attack and the robustness of our defensive mechanism.
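The abstract describes the attack only at a high level, so the following is a minimal, hypothetical sketch (not the paper's algorithm) of how an adversary might craft an item-promotion gradient conditioned on one neighbor, assuming a matrix-factorization DecRec where clients exchange item-embedding gradients. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def craft_promotion_gradient(item_embs, target_item, neighbor_user_emb):
    """Hypothetical sketch: push the target item's embedding toward a
    neighbor's (estimated) user embedding so the dot-product score rises.
    PAMN's adaptive conditioning on each neighbor is not specified in the
    abstract; this only illustrates the promotion objective."""
    grad = np.zeros_like(item_embs)
    # For a promotion loss L = -<u, v_target>, the gradient w.r.t. v_target
    # is -u, so a neighbor applying v -= lr * grad raises <u, v_target>.
    grad[target_item] = -neighbor_user_emb
    return grad
```

The defense is named concretely (user-level gradient clipping with sparsified updating), but its parameters are not; this sketch combines a per-user norm bound with top-k magnitude sparsification under assumed defaults:

```python
import numpy as np

def clip_and_sparsify(grad, clip_norm=1.0, keep_ratio=0.1):
    """Sketch of user-level gradient clipping with sparsified updating:
    bound the influence any single neighbor can exert, then keep only the
    largest-magnitude coordinates of the shared gradient."""
    norm = np.linalg.norm(grad)
    if norm > clip_norm:                      # user-level clipping
        grad = grad * (clip_norm / norm)
    flat = np.abs(grad).ravel()
    k = max(1, int(keep_ratio * flat.size))   # sparsified updating (top-k)
    threshold = np.partition(flat, -k)[-k]
    return grad * (np.abs(grad) >= threshold)
```

A receiving client would pass every neighbor-supplied gradient through clip_and_sparsify before merging it locally, capping the per-neighbor influence that the attack relies on.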
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique (sketched below).
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
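For context on the double-masking technique mentioned above, here is a simplified sketch of the SecAgg idea (my own illustration, not ACCESS-FL's protocol): each pair of clients derives a shared mask that cancels when the server sums all masked updates, and a self-mask guards against dropouts.

```python
import numpy as np

def mask_update(update, client_id, peer_ids, pair_seeds, self_seed):
    """Simplified double masking: pairwise masks cancel across clients in
    the server-side sum; the self-mask is removed later via secret sharing.
    Seed agreement and dropout recovery are omitted from this sketch."""
    masked = update.astype(float).copy()
    for peer in peer_ids:
        seed = pair_seeds[frozenset((client_id, peer))]
        mask = np.random.default_rng(seed).normal(size=update.shape)
        # Opposite signs on the two endpoints make the pair's masks cancel.
        masked += mask if client_id < peer else -mask
    masked += np.random.default_rng(self_seed).normal(size=update.shape)
    return masked
```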
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
Experiments indicate that our defense mechanism is highly effective against client-side training data distribution inference attacks.
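The summary does not spell out InferGuard's aggregation rule, so as a generic illustration of the Byzantine-robust family it belongs to, here is a coordinate-wise median aggregator (a standard baseline, not InferGuard itself):

```python
import numpy as np

def coordinatewise_median(updates):
    """Generic Byzantine-robust aggregation: the per-coordinate median is
    insensitive to a minority of arbitrarily corrupted client updates."""
    return np.median(np.stack(updates), axis=0)
```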
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- Securing NextG Systems against Poisoning Attacks on Federated Learning: A Game-Theoretic Solution [9.800359613640763]
This paper studies the poisoning attack and defense interactions in a federated learning (FL) system.
FL collectively trains a global model without the need for clients to exchange their data samples.
The presence of malicious clients introduces the risk of poisoning the training data to manipulate the global model through falsified local model exchanges.
arXiv Detail & Related papers (2023-12-28T17:52:21Z)
- Incentivized Communication for Federated Bandits [67.4682056391551]
We introduce an incentivized communication problem for federated bandits, where the server provides incentives to motivate clients to share data.
We propose the first incentivized communication protocol, namely, Inc-FedUCB, that achieves near-optimal regret with provable communication and incentive cost guarantees.
arXiv Detail & Related papers (2023-09-21T00:59:20Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
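As background on how gradient inversion reconstructs training samples, here is a toy sketch of the generic gradient-matching idea (in the spirit of deep leakage from gradients, not CGI's actual client-side procedure), using a linear model with squared loss so all gradients are analytic:

```python
import numpy as np

def invert_gradient(w, observed_grad, steps=5000, lr=0.01):
    """Optimize a dummy sample (x, y) until its gradient matches the
    observed one. Toy model: loss = 0.5 * (w.x - y)**2, whose gradient
    w.r.t. w is (w.x - y) * x."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=w.shape)        # dummy input
    y = 0.0                             # dummy label
    for _ in range(steps):
        d = w @ x - y                   # residual of the dummy sample
        r = d * x - observed_grad       # gradient-matching residual
        # Analytic gradients of ||r||^2 w.r.t. x and y.
        x -= lr * 2.0 * (d * r + (r @ x) * w)
        y -= lr * (-2.0 * (r @ x))
    return x, y
```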
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated Learning is designed to address privacy concerns when training models.
This new distributed paradigm safeguards data privacy but alters the attack surface, since the server cannot access local datasets.
arXiv Detail & Related papers (2022-12-05T01:52:32Z)
- FedRecAttack: Model Poisoning Attack to Federated Recommendation [5.308983430479344]
Federated Recommendation (FR) has received considerable popularity and attention in the past few years.
In this paper we present FedRecAttack, a model poisoning attack on FR that aims to raise the exposure ratio of target items.
In most recommendation scenarios, apart from private user-item interactions (e.g., clicks, watches and purchases), some interactions are public.
Motivated by this point, in FedRecAttack we make use of the public interactions to approximate users' feature vectors.
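A minimal sketch of that idea, assuming a matrix-factorization recommender and a simple mean-pooling estimator (the paper's exact estimator may differ; names are illustrative):

```python
import numpy as np

def approximate_user_embedding(item_embs, public_item_ids):
    """Approximate a user's feature vector from public interactions by
    averaging the embeddings of the items they publicly interacted with."""
    if len(public_item_ids) == 0:
        return np.zeros(item_embs.shape[1])
    return item_embs[np.asarray(public_item_ids)].mean(axis=0)
```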
arXiv Detail & Related papers (2022-04-01T05:18:47Z)
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion [58.870444954499014]
A common practice is to subsume recommender systems under the decentralized federated learning paradigm.
We present a systematic approach to backdooring federated recommender systems for targeted item promotion.
arXiv Detail & Related papers (2021-10-21T06:48:35Z)