FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA Subparameter Updates
- URL: http://arxiv.org/abs/2503.07216v2
- Date: Tue, 11 Mar 2025 12:49:15 GMT
- Title: FedRand: Enhancing Privacy in Federated Learning with Randomized LoRA Subparameter Updates
- Authors: Sangwoo Park, Seanie Lee, Byungjoo Kim, Sung Ju Hwang
- Abstract summary: Federated Learning (FL) is a widely used framework for training models in a decentralized manner. We propose the FedRand framework, which avoids disclosing the full set of client parameters. We empirically validate that FedRand improves robustness against MIAs compared to relevant baselines.
- Score: 58.18162789618869
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a widely used framework for training models in a decentralized manner, ensuring that the central server does not have direct access to data from local clients. However, this approach may still fail to fully preserve data privacy, as models from local clients are exposed to the central server during the aggregation process. This issue becomes even more critical when training vision-language models (VLMs) with FL, as VLMs can easily memorize training data instances, making them vulnerable to membership inference attacks (MIAs). To address this challenge, we propose the FedRand framework, which avoids disclosing the full set of client parameters. In this framework, each client randomly selects subparameters of Low-Rank Adaptation (LoRA) from the server and keeps the remaining counterparts of the LoRA weights as private parameters. After training both parameters on the client's private dataset, only the non-private client parameters are sent back to the server for aggregation. This approach mitigates the risk of exposing client-side VLM parameters, thereby enhancing data privacy. We empirically validate that FedRand improves robustness against MIAs compared to relevant baselines while achieving accuracy comparable to methods that communicate full LoRA parameters across several benchmark datasets.
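The core mechanism lends itself to a short illustration. Below is a minimal NumPy sketch of the randomized subparameter-selection idea, assuming the client chooses per layer between the LoRA A and B factors; the layer names, shapes, and local training step are placeholders, and the paper's full protocol (aggregation, scheduling) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_round(server_lora, private_lora, layers):
    """One client round of the randomized-subparameter idea (sketch).

    For each layer the client randomly picks one LoRA factor ("A" or "B")
    to pull from the server; the counterpart factor stays private on the
    client and is never uploaded.
    """
    shared = {layer: str(rng.choice(["A", "B"])) for layer in layers}
    local = {}
    for layer in layers:
        k = shared[layer]                     # factor synced from the server
        p = "B" if k == "A" else "A"          # counterpart kept private
        local[layer] = {k: server_lora[layer][k].copy(),
                        p: private_lora[layer][p]}
    # ... fine-tune both factors on the client's private data here ...
    return {layer: {shared[layer]: local[layer][shared[layer]]}
            for layer in layers}              # only non-private parts sent

# Toy usage: one layer with rank-2 LoRA factors (shapes are placeholders).
server = {"layer0": {"A": np.zeros((2, 8)), "B": np.zeros((8, 2))}}
private = {"layer0": {"A": rng.normal(size=(2, 8)),
                      "B": rng.normal(size=(8, 2))}}
update = client_round(server, private, ["layer0"])
print({layer: list(u) for layer, u in update.items()})
```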
Related papers
- Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG)
FedE4RAG facilitates collaborative training of client-side RAG retrieval models.
We apply homomorphic encryption within federated learning to safeguard model parameters.
arXiv Detail & Related papers (2025-04-27T04:26:02Z)
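The homomorphic-encryption pattern mentioned in the entry above can be illustrated with the Paillier cryptosystem, which is additively homomorphic. The sketch below uses the open-source `phe` library; FedE4RAG's actual scheme, key management, and parameter layout are not specified in this summary, so treat this as the generic pattern rather than the paper's implementation.

```python
# pip install phe  -- Paillier, an additively homomorphic cryptosystem
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

# Each client encrypts its (flattened) parameter update before upload.
client_updates = [[0.10, -0.20], [0.30, 0.05]]
encrypted = [[pub.encrypt(w) for w in upd] for upd in client_updates]

# The server adds ciphertexts without ever seeing plaintext parameters.
aggregate = [sum(col) for col in zip(*encrypted)]

# A key-holding party decrypts the sum and averages over the clients.
average = [priv.decrypt(c) / len(client_updates) for c in aggregate]
print(average)  # approximately [0.2, -0.075]
```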
- Efficient Federated Unlearning under Plausible Deniability [1.795561427808824]
Machine unlearning modifies the model parameters to remove the influence of a specific data point on the trained weights.
Recent literature has highlighted that the contribution of a data point can be forged by other data points in the dataset with probability close to one.
This paper introduces an efficient way to achieve federated unlearning, by employing a privacy model which allows the FL server to plausibly deny the client's participation.
arXiv Detail & Related papers (2024-10-13T18:08:24Z)
- Safely Learning with Private Data: A Federated Learning Framework for Large Language Model [3.1077263218029105]
Federated learning (FL) is an ideal solution for training models with distributed private data. Traditional frameworks like FedAvg are unsuitable for large language models (LLMs). We propose FL-GLM, which prevents data leakage caused by both server-side and peer-client attacks.
arXiv Detail & Related papers (2024-06-21T06:43:15Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
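To make the idea of optimizing the aggregation concrete, here is a hedged NumPy sketch: the aggregated model is constrained to the convex hull of the client uploads, and the combination weights are fit on a small server-held proxy set. The proxy data, linear model, and objective are invented for illustration; SmartFL's exact parametrization and subspace construction may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Server-side aggregation as optimization (illustrative): constrain the
# aggregated model to the convex hull of the client uploads and fit the
# combination weights on a small server-held proxy set.
d, n_clients = 5, 4
client_models = rng.normal(size=(n_clients, d))   # uploaded parameters
X_proxy = rng.normal(size=(32, d))                # tiny proxy dataset
y_proxy = X_proxy @ rng.normal(size=d)            # synthetic targets

logits = np.zeros(n_clients)                      # learnable weights
for _ in range(200):
    w = np.exp(logits); w /= w.sum()              # softmax onto the simplex
    theta = w @ client_models                     # aggregated model
    resid = X_proxy @ theta - y_proxy
    grad_theta = X_proxy.T @ resid / len(y_proxy) # d(loss)/d(theta)
    grad_w = client_models @ grad_theta           # chain rule to weights
    logits -= 0.5 * w * (grad_w - w @ grad_w)     # softmax Jacobian step

w = np.exp(logits); w /= w.sum()
print("learned aggregation weights:", np.round(w, 3))
```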
- FedPerm: Private and Robust Federated Learning by Parameter Permutation [2.406359246841227]
Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model.
Client data privacy is paramount in FL. At the same time, the model must be protected from poisoning attacks from adversarial clients.
We present FedPerm, a new FL algorithm that addresses both these problems by combining a novel intra-model parameter shuffling technique that amplifies data privacy, with Private Information Retrieval (PIR) based techniques that permit cryptographic aggregation of clients' model updates.
arXiv Detail & Related papers (2022-08-16T19:40:28Z)
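The parameter-shuffling half of FedPerm can be sketched in a few lines: each client permutes its flattened update with a secret permutation before upload, and only a party that knows the permutation can restore the original order. The PIR-based cryptographic aggregation that FedPerm pairs with this is elided; the seed and sizes below are placeholders.

```python
import numpy as np

def shuffle_update(update, seed):
    """Permute a flattened model update with a client-secret permutation."""
    perm = np.random.default_rng(seed).permutation(update.size)
    return update[perm], perm

def unshuffle(shuffled, perm):
    """Invert the secret permutation to restore parameter order."""
    restored = np.empty_like(shuffled)
    restored[perm] = shuffled
    return restored

update = np.random.default_rng(2).normal(size=8)  # a flattened update
sent, perm = shuffle_update(update, seed=1234)    # what leaves the client
assert np.allclose(unshuffle(sent, perm), update)
```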
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
No local data needs to be shared, and privacy can be well protected.
We propose a model-update-based federated averaging algorithm to defend against Byzantine attacks.
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
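The summary above does not spell out the defense, so the sketch below shows a standard Byzantine-robust aggregator (coordinate-wise median) applied to model updates, purely for illustration; it is not the paper's specific algorithm.

```python
import numpy as np

# Three honest updates and one poisoned update from a Byzantine client.
honest = [np.array([0.10, 0.20]), np.array([0.12, 0.18]),
          np.array([0.09, 0.21])]
byzantine = [np.array([10.0, -10.0])]
updates = np.stack(honest + byzantine)

print("mean  :", updates.mean(axis=0))        # dragged off by the attacker
print("median:", np.median(updates, axis=0))  # stays near honest updates
```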
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large data quantities to amplify the impact of their updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
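As a toy illustration of quantity-aware aggregation, the sketch below weights client updates by their reported data quantities but caps inflated claims; FedRA's actual defense is more involved than a simple cap, so this only conveys the general idea.

```python
import numpy as np

def quantity_aware_average(updates, reported_n, cap):
    """Weight updates by reported data quantities, but clip the claims so
    a single client cannot dominate the aggregate (a toy cap, not FedRA's
    actual defense)."""
    n = np.minimum(np.asarray(reported_n, dtype=float), cap)
    weights = n / n.sum()
    return weights @ np.stack(updates)

updates = [np.array([0.1, 0.2]), np.array([0.2, 0.1]), np.array([5.0, 5.0])]
reported = [100, 120, 10_000]        # the last client inflates its count
print(quantity_aware_average(updates, reported, cap=200))
```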
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
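The summary above does not describe how FedReg alleviates forgetting, so the sketch below instead shows a FedProx-style proximal penalty toward the global weights, a common way to curb local forgetting; it is an illustration of the general idea, not FedReg's mechanism.

```python
import numpy as np

def local_step(w_local, grad_loss, w_global, lr=0.1, mu=0.01):
    """One local SGD step with a proximal pull toward the global model,
    which discourages drifting away from (forgetting) global knowledge."""
    return w_local - lr * (grad_loss + mu * (w_local - w_global))

w_global = np.zeros(3)
w = w_global.copy()
for _ in range(10):
    fake_grad = np.array([1.0, -0.5, 0.2])  # stand-in for a data gradient
    w = local_step(w, fake_grad, w_global)
print(w)
```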
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a stochastic coded federated learning framework (SCFL) to mitigate the straggler issue.
We characterize the privacy guarantee by mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
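The aggregation step of such a framework has a clean closed form for Gaussian (Laplace-approximated) posteriors: multiplying the client densities yields summed precisions and a precision-weighted mean. The sketch below assumes diagonal precisions and elides the prior-overlap corrections an actual online scheme would need.

```python
import numpy as np

# Product of per-client Gaussian posteriors (diagonal precisions):
# combined precision is the sum of precisions, and the combined mean is
# the precision-weighted average of the client means.
means = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
precisions = [np.array([4.0, 1.0]), np.array([1.0, 4.0])]

P = np.sum(precisions, axis=0)
mu = np.sum([p * m for p, m in zip(precisions, means)], axis=0) / P
print("aggregated mean:", mu)        # [0.8, 1.6]
print("aggregated precision:", P)    # [5.0, 5.0]
```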
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.