Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective
- URL: http://arxiv.org/abs/2501.03301v2
- Date: Wed, 08 Jan 2025 11:47:25 GMT
- Title: Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective
- Authors: Zhongjian Zhang, Mengmei Zhang, Xiao Wang, Lingjuan Lyu, Bo Yan, Junping Du, Chuan Shi
- Abstract summary: Federated recommendation (FR) based on federated learning (FL) has emerged to preserve user privacy, keeping personal data on the local client and updating a model collaboratively.
FR has a unique sparse aggregation mechanism, where the embedding of each item is updated by only a subset of clients, rather than by all clients as in the dense aggregation of general FL.
In this paper, we reformulate Byzantine robustness under sparse aggregation by defining the aggregation for a single item as the smallest execution unit.
We propose a family of effective attack strategies, named Spattack, which exploit the vulnerabilities of sparse aggregation and are categorized according to the adversary's knowledge and capability.
- Score: 65.65471972217814
- License:
- Abstract: To preserve user privacy in recommender systems, federated recommendation (FR) based on federated learning (FL) has emerged, keeping personal data on the local client and updating a model collaboratively. Unlike general FL, FR has a unique sparse aggregation mechanism, where the embedding of each item is updated by only a subset of clients, rather than by all clients as in dense aggregation. Recently, as an essential principle of FL, model security has received increasing attention, especially regarding Byzantine attacks, in which malicious clients can send arbitrary updates. Exploring the Byzantine robustness of FR is particularly critical, since in the domains applying FR, e.g., e-commerce, malicious clients can easily be injected by registering new accounts. However, existing work on Byzantine robustness neglects the unique sparse aggregation of FR, making it unsuitable for our problem. Thus, we make the first effort to investigate Byzantine attacks on FR from the perspective of sparse aggregation, which is non-trivial: it is not clear how to define Byzantine robustness under sparse aggregation or how to design Byzantine attacks under limited knowledge/capability. In this paper, we reformulate Byzantine robustness under sparse aggregation by defining the aggregation for a single item as the smallest execution unit. Then we propose a family of effective attack strategies, named Spattack, which exploit the vulnerabilities of sparse aggregation and are categorized according to the adversary's knowledge and capability. Extensive experimental results demonstrate that Spattack can effectively prevent convergence and even break existing defenses with only a few malicious clients, raising alarms for securing FR systems.
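
To make the sparse-aggregation vulnerability concrete, here is a minimal Python sketch of per-item aggregation, assuming a server that simply averages the embedding updates of only those clients that interacted with each item. The function `sparse_aggregate` and the toy attack vector are illustrative assumptions, not the paper's actual Spattack construction.

```python
import numpy as np

def sparse_aggregate(item_updates):
    """Average embedding updates per item over only the clients that hold it.

    item_updates: dict mapping item_id -> list of update vectors, one per
    participating client. Each item's aggregation is independent, i.e. the
    smallest execution unit under sparse aggregation.
    """
    return {
        item_id: np.mean(np.stack(updates), axis=0)
        for item_id, updates in item_updates.items()
    }

# Toy example: item 0 is popular (four honest clients), while item 1 is
# rare (one honest client plus one attacker). Because so few clients
# update item 1, a single malicious update dominates its embedding.
updates = {
    0: [np.ones(4)] * 4,
    1: [np.ones(4), -10.0 * np.ones(4)],  # hypothetical malicious update
}
for item_id, emb in sparse_aggregate(updates).items():
    print(item_id, emb)
```

The contrast with dense aggregation in general FL is that there every client contributes to every parameter, so a fixed number of attackers is diluted by the full client population; under sparse aggregation, the effective honest population per item can be arbitrarily small.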
Related papers
- Do We Really Need to Design New Byzantine-robust Aggregation Rules? [9.709243052112921]
Federated learning (FL) allows multiple clients to collaboratively train a global machine learning model through a server.
The decentralized aspect of FL makes it susceptible to poisoning attacks, where malicious clients can manipulate the global model.
We present FoundationFL, a novel defense mechanism against poisoning attacks.
arXiv Detail & Related papers (2025-01-29T02:28:03Z)
- FedCAP: Robust Federated Learning via Customized Aggregation and Personalization [13.17735010891312]
Federated learning (FL) has been applied to various privacy-preserving scenarios.
We propose FedCAP, a robust FL framework against both data heterogeneity and Byzantine attacks.
We show that FedCAP performs well in several non-IID settings and exhibits strong robustness under a series of poisoning attacks.
arXiv Detail & Related papers (2024-10-16T23:01:22Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Certifiably Byzantine-Robust Federated Conformal Prediction [49.23374238798428]
We introduce Rob-FCP, a novel framework that performs robust federated conformal prediction, effectively countering malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
arXiv Detail & Related papers (2024-06-04T04:43:30Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
However, FL is vulnerable to the cross-client generative adversarial network (GAN)-based attack, known as the C-GANs attack.
We propose the Fed-EDKD technique to improve current popular FL schemes so that they resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning [4.627944480085717]
Byzantine-robust federated learning aims at mitigating Byzantine failures during the federated training process.
Several robust aggregation schemes have been proposed to defend against malicious updates from Byzantine clients (a minimal sketch of one such rule appears after this list).
We conduct an experimental study of Byzantine-robust aggregation schemes under different attacks using two popular algorithms in federated learning.
arXiv Detail & Related papers (2023-02-14T16:36:38Z)
- A Secure Federated Learning Framework for Residential Short Term Load Forecasting [1.1254693939127909]
Federated Learning (FL) is a machine learning alternative that enables collaborative training of a short-term load forecasting model without exposing private raw data.
Standard FL is still vulnerable to an intractable cyber threat known as a Byzantine attack, carried out by faulty and/or malicious clients.
We develop a state-of-the-art differentially private, secured FL-based framework that ensures the privacy of individual smart meters' data while protecting the security of the FL model and architecture.
arXiv Detail & Related papers (2022-09-29T04:36:16Z)
- MUDGUARD: Taming Malicious Majorities in Federated Learning using Privacy-Preserving Byzantine-Robust Clustering [34.429892915267686]
Byzantine-robust Federated Learning (FL) aims to counter malicious clients and train an accurate global model while maintaining an extremely low attack success rate.
Most existing systems, however, are only robust when most of the clients are honest.
We propose a novel Byzantine-robust and privacy-preserving FL system, called MUDGUARD, that can operate under a malicious minority or majority on both the server and client sides.
arXiv Detail & Related papers (2022-08-22T09:17:58Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large data quantities to amplify the impact of their updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities (a sketch of the quantity-inflation threat appears after this list).
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- Learning from History for Byzantine Robust Optimization [52.68913869776858]
Byzantine robustness has received significant attention recently given its importance for distributed learning.
We show that most existing robust aggregation rules may not converge even in the absence of any Byzantine attackers.
arXiv Detail & Related papers (2020-12-18T16:22:32Z)
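
As background for the aggregation-rule entries above (e.g., the experimental study of Byzantine-robust aggregation schemes), below is a minimal sketch of one classical robust rule, the coordinate-wise median. It is a generic illustration under our own assumptions, not the specific scheme evaluated in any paper listed here.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Classical Byzantine-robust rule: replace the mean with the
    per-coordinate median, so a minority of arbitrarily large updates
    cannot drag the aggregate arbitrarily far."""
    return np.median(np.stack(updates), axis=0)

honest = [np.ones(3), np.ones(3), 1.1 * np.ones(3)]
byzantine = [1e6 * np.ones(3)]          # one attacker sends a huge update
print(np.mean(np.stack(honest + byzantine), axis=0))  # mean is hijacked
print(coordinate_wise_median(honest + byzantine))     # median stays near 1.0
```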
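
Similarly, for the quantity-aware aggregation entry (FedRA), here is a minimal sketch of the quantity-inflation threat it targets: FedAvg-style weighting by self-reported data quantities lets an attacker amplify its update by over-claiming. The function name and the numbers are illustrative assumptions, not FedRA itself.

```python
import numpy as np

def quantity_weighted_average(updates, claimed_sizes):
    """FedAvg-style aggregation weighted by each client's self-reported
    local data quantity. Nothing verifies the claims, so an attacker can
    inflate its weight (the threat quantity-aware defenses target)."""
    weights = np.asarray(claimed_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, np.stack(updates)))

honest = [np.ones(3), np.ones(3)]
attacker = [-np.ones(3)]
# Honest clients each claim 100 samples; the attacker claims 10,000,
# so the aggregate is pulled almost entirely toward the malicious update.
print(quantity_weighted_average(honest + attacker, [100, 100, 10_000]))
```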