Federated Learning with Reduced Information Leakage and Computation
- URL: http://arxiv.org/abs/2310.06341v1
- Date: Tue, 10 Oct 2023 06:22:06 GMT
- Title: Federated Learning with Reduced Information Leakage and Computation
- Authors: Tongxin Yin, Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning (FL) is a distributed learning paradigm that allows
multiple decentralized clients to collaboratively learn a common model without
sharing local data. Although local data is not exposed directly, privacy
concerns nonetheless exist as clients' sensitive information can be inferred
from intermediate computations. Moreover, such information leakage accumulates
substantially over time as the same data is repeatedly used during the
iterative learning process. As a result, it can be particularly difficult to
balance the privacy-accuracy trade-off when designing privacy-preserving FL
algorithms. In this paper, we introduce Upcycled-FL, a novel federated learning
framework with first-order approximation applied at every even iteration. Under
this framework, half of the FL updates incur no information leakage and require
much less computation. We first conduct the theoretical analysis on the
convergence (rate) of Upcycled-FL, and then apply perturbation mechanisms to
preserve privacy. Experiments on real-world data show that Upcycled-FL
consistently outperforms existing methods over heterogeneous data, and
significantly improves the privacy-accuracy trade-off while reducing training
time by 48% on average.
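The alternating schedule described above can be illustrated with a minimal sketch. This is not the paper's exact update rule: it assumes the even-iteration step is a data-free extrapolation from cached iterates (the first-order approximation means no fresh data access), and the helper names `local_train` and `upcycled_fl`, the toy quadratic loss, and the extrapolation coefficient 0.5 are all illustrative choices, not from the paper.

```python
import numpy as np

def local_train(model, data, lr=0.1, steps=1):
    # Odd iteration: gradient descent on a toy quadratic loss
    # ||model - mean(data)||^2, standing in for real local training.
    for _ in range(steps):
        grad = 2.0 * (model - data.mean(axis=0))
        model = model - lr * grad
    return model

def upcycled_fl(client_data, rounds=6, dim=2):
    # Odd rounds touch client data; even rounds reuse the previous
    # model difference and therefore incur no new information leakage
    # and almost no computation.
    global_model = np.zeros(dim)
    prev_model = global_model.copy()
    for t in range(1, rounds + 1):
        if t % 2 == 1:  # odd: standard FL round over local data
            updates = [local_train(global_model.copy(), d) for d in client_data]
            new_model = np.mean(updates, axis=0)
        else:           # even: extrapolate from cached iterates, data-free
            new_model = global_model + 0.5 * (global_model - prev_model)
        prev_model, global_model = global_model, new_model
    return global_model
```

In a real privacy-preserving instantiation, the perturbation mechanism (e.g., added noise) would only need to be applied on the odd, data-touching rounds, which is where the improved trade-off comes from.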
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedPDC: Federated Learning for Public Dataset Correction
Federated learning has lower classification accuracy than traditional machine learning in Non-IID scenarios.
A new algorithm, FedPDC, is proposed to optimize the aggregation mode of local models and the loss function of local training.
In many benchmark experiments, FedPDC can effectively improve the accuracy of the global model in the case of extremely unbalanced data distribution.
arXiv Detail & Related papers (2023-02-24T08:09:23Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
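The one-way offline distillation idea above can be sketched roughly. The actual paper distills ensemble attention information; as a simplified illustration, this assumes plain prediction distillation: each client scores shared unlabeled public data, the server averages the resulting soft labels, and a central student is fit to that ensemble. The helper names (`ensemble_distill`, `softmax`) and linear models are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ensemble_distill(local_models, public_x, dim_out, lr=0.5, epochs=200):
    # local_models: list of per-client weight matrices; logits = x @ W.
    # 1) Each client scores the shared unlabeled public data.
    # 2) The server averages the soft labels (the ensemble teacher).
    # 3) A central student is trained against those soft labels only,
    #    so no raw client data leaves the clients.
    soft = np.mean([softmax(public_x @ W) for W in local_models], axis=0)
    student = np.zeros((public_x.shape[1], dim_out))
    n = public_x.shape[0]
    for _ in range(epochs):
        p = softmax(public_x @ student)
        grad = public_x.T @ (p - soft) / n  # cross-entropy gradient
        student -= lr * grad
    return student
```

The distillation is "one-way" in that clients never receive anything back; only their predictions on public data flow to the server, which is what reduces the leakage surface relative to sharing parameters.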
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation
Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model.
Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution.
We develop a privacy-preserving and communication-efficient method in an FL framework with one-shot offline knowledge distillation.
arXiv Detail & Related papers (2022-09-10T05:20:31Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Towards Federated Learning on Time-Evolving Heterogeneous Data
Federated Learning (FL) is an emerging learning paradigm that preserves privacy by ensuring client data locality on edge devices.
Despite recent research efforts on improving the optimization of heterogeneous data, the impact of time-evolving heterogeneous data in real-world scenarios has not been well studied.
We propose Continual Federated Learning (CFL), a flexible framework, to capture the time-evolving heterogeneity of FL.
arXiv Detail & Related papers (2021-12-25T14:58:52Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- FedMix: Approximation of Mixup under Mean Augmented Federated Learning
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, inspired by Mixup, a simple yet effective data augmentation method.
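For context, the underlying Mixup step can be sketched as follows. This shows plain Mixup (convex combinations of random example pairs and their one-hot labels), not FedMix itself, which approximates this across clients using only averaged data so that raw examples are never exchanged; the function name `mixup_batch` and the default `alpha` are illustrative.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    # Plain Mixup: draw a mixing weight from Beta(alpha, alpha), pair
    # each example with a randomly permuted partner, and take convex
    # combinations of both inputs and one-hot labels.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```

In the federated setting the difficulty is that the partner example may live on another client; FedMix's contribution is approximating this cross-client mixing without sharing raw data.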
arXiv Detail & Related papers (2021-07-01T06:14:51Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.