Decentralized Federated Learning through Proxy Model Sharing
- URL: http://arxiv.org/abs/2111.11343v2
- Date: Tue, 23 May 2023 02:18:27 GMT
- Title: Decentralized Federated Learning through Proxy Model Sharing
- Authors: Shivam Kalra, Junfeng Wen, Jesse C. Cresswell, Maksims Volkovs, Hamid
R. Tizhoosh
- Abstract summary: We propose a communication-efficient scheme for decentralized federated learning called ProxyFL.
We show that ProxyFL can outperform existing alternatives with much less communication overhead and stronger privacy.
- Score: 15.749416770494708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Institutions in highly regulated domains such as finance and healthcare often
have restrictive rules around data sharing. Federated learning is a distributed
learning framework that enables multi-institutional collaborations on
decentralized data with improved protection for each collaborator's data
privacy. In this paper, we propose a communication-efficient scheme for
decentralized federated learning called ProxyFL, or proxy-based federated
learning. Each participant in ProxyFL maintains two models: a private model
and a publicly shared proxy model designed to protect the participant's
privacy. Proxy models allow efficient information exchange among participants
without the need for a centralized server. The proposed method eliminates a
significant limitation of canonical federated learning by allowing model
heterogeneity; each participant can have a private model with any architecture.
Furthermore, our protocol for communication by proxy leads to stronger privacy
guarantees using differential privacy analysis. Experiments on popular image
datasets, and a cancer diagnostic problem using high-quality gigapixel
histology whole slide images, show that ProxyFL can outperform existing
alternatives with much less communication overhead and stronger privacy.
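The round structure described in the abstract (two models per participant, serverless proxy exchange) can be illustrated with a toy simulation. The following is a minimal sketch, not the authors' reference implementation: it assumes flat NumPy weight vectors, a single placeholder local-update step in place of the paper's training objective and DP-SGD accounting, a directed ring as the communication graph, and simple adoption of the received proxy.

```python
# Toy sketch of a ProxyFL-style round; NOT the authors' reference code.
# Simplifications (assumptions): models are flat NumPy weight vectors,
# local training is a single placeholder step standing in for the paper's
# training objective and DP-SGD, the communication graph is a directed
# ring, and a received proxy simply replaces the local one.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICIPANTS, DIM, ROUNDS = 4, 10, 3

# Each participant keeps a private model (never shared, any architecture)
# and a proxy model (the only object that ever leaves the institution).
private = [rng.normal(size=DIM) for _ in range(N_PARTICIPANTS)]
proxy = [rng.normal(size=DIM) for _ in range(N_PARTICIPANTS)]

def local_update(w, lr=0.1, noise_scale=0.01):
    """Placeholder local training step; the added noise stands in for the
    differentially private perturbation applied to proxy updates."""
    grad = rng.normal(size=w.shape)  # stand-in for a real gradient
    return w - lr * grad + noise_scale * rng.normal(size=w.shape)

for _ in range(ROUNDS):
    # 1) Local step: each participant updates both models on its own data.
    for i in range(N_PARTICIPANTS):
        private[i] = local_update(private[i])
        proxy[i] = local_update(proxy[i])
    # 2) Serverless exchange: each proxy is sent to the next participant
    #    around the ring; no central aggregator is involved.
    received = [proxy[(i - 1) % N_PARTICIPANTS] for i in range(N_PARTICIPANTS)]
    # 3) Each participant adopts the received proxy (a simplification); in the
    #    next local step it can distil from it into its private model.
    proxy = [w.copy() for w in received]
```

The property the sketch preserves is the one the abstract emphasizes: only proxy weights ever leave a participant, while private models and raw data stay local.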
Related papers
- TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning [16.898842295300067]
Federated learning is a computing paradigm that enhances privacy by enabling multiple parties to collaboratively train a machine learning model without revealing personal data.
Traditional federated learning platforms cannot fully ensure privacy because the gradients exchanged during training can leak information.
This paper proposes TAPFed, an approach for achieving privacy-preserving federated learning in the context of multiple decentralized aggregators with malicious actors.
arXiv Detail & Related papers (2025-01-09T08:24:10Z)
- Large Language Model Federated Learning with Blockchain and Unlearning for Cross-Organizational Collaboration [18.837908762300493]
Large language models (LLMs) have transformed the way computers understand and process human language, but using them effectively across different organizations remains difficult.
We propose a hybrid blockchain-based federated learning framework that combines public and private blockchain architectures with multi-agent reinforcement learning.
Our framework enables transparent sharing of model updates through the public blockchain while protecting sensitive computations in private chains.
arXiv Detail & Related papers (2024-12-18T06:56:09Z)
- Lancelot: Towards Efficient and Privacy-Preserving Byzantine-Robust Federated Learning within Fully Homomorphic Encryption [10.685816010576918]
We propose Lancelot, an innovative and computationally efficient BRFL framework that employs fully homomorphic encryption (FHE) to safeguard against malicious client activities while preserving data privacy.
Our extensive testing, which includes medical imaging diagnostics and widely-used public image datasets, demonstrates that Lancelot significantly outperforms existing methods, offering more than a twenty-fold increase in processing speed, all while maintaining data privacy.
arXiv Detail & Related papers (2024-08-12T14:48:25Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real footage, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning scheme with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
arXiv Detail & Related papers (2023-05-20T07:55:58Z)
- Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Privacy-preserving Decentralized Aggregation for Federated Learning [3.9323226496740733]
Federated learning is a promising framework for learning over decentralized data spanning multiple regions.
We develop a privacy-preserving decentralized aggregation protocol for federated learning.
We evaluate our algorithm on image classification and next-word prediction applications over benchmark datasets with 9 and 15 distributed sites.
arXiv Detail & Related papers (2020-12-13T23:45:42Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
- Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process (sketched generically after this list) is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed local data or collecting any centralised data.
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
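For contrast with ProxyFL's serverless design, several of the related papers above (e.g. the FedReID entry) iterate the canonical client-server loop: a central server broadcasts a global model, clients train locally on their private data, and the server averages the returned weights. Below is a minimal, hypothetical NumPy sketch of that loop, with placeholder local training and no privacy mechanism; it is an illustration, not any of the cited papers' methods.

```python
# Toy sketch of the canonical client-server federated averaging loop, shown
# only for contrast with ProxyFL's serverless exchange; hypothetical
# simplifications: flat NumPy weights, placeholder local training, and no
# privacy mechanism.
import numpy as np

rng = np.random.default_rng(1)
N_CLIENTS, DIM, ROUNDS = 5, 8, 3

global_model = np.zeros(DIM)

def client_update(w_global, lr=0.1):
    """Placeholder for one client's local training on its private data."""
    grad = rng.normal(size=w_global.shape)  # stand-in for a real gradient
    return w_global - lr * grad

for _ in range(ROUNDS):
    # The server broadcasts the global model; each client trains locally.
    local_models = [client_update(global_model) for _ in range(N_CLIENTS)]
    # The server averages the returned weights; raw data never leaves clients,
    # but the server remains a single point of trust and a bottleneck.
    global_model = np.mean(local_models, axis=0)
```

Raw data never leaves a client in this loop, but the central server remains a single point of trust and a communication bottleneck, which is precisely what decentralized schemes such as ProxyFL aim to remove.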
This list is automatically generated from the titles and abstracts of the papers in this site.