Analytic Federated Learning
- URL: http://arxiv.org/abs/2405.16240v1
- Date: Sat, 25 May 2024 13:58:38 GMT
- Title: Analytic Federated Learning
- Authors: Huiping Zhuang, Run He, Kai Tong, Di Fang, Han Sun, Haoran Li, Tianyi Chen, Ziqian Zeng
- Abstract summary: We introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) community.
Our AFL draws inspiration from analytic learning -- a gradient-free technique that trains neural networks with analytical solutions in one epoch.
We conduct experiments across various FL settings including extremely non-IID ones, and scenarios with a large number of clients.
- Score: 34.15482252496494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce analytic federated learning (AFL), a new training paradigm that brings analytical (i.e., closed-form) solutions to the federated learning (FL) community. AFL draws inspiration from analytic learning, a gradient-free technique that trains neural networks with analytical solutions in one epoch. In the local client training stage, AFL performs one-epoch training, eliminating the need for multi-epoch updates. In the aggregation stage, we derive an absolute aggregation (AA) law that allows single-round aggregation, removing the need for multiple aggregation rounds. More importantly, AFL exhibits a *weight-invariant* property: regardless of how the full dataset is distributed among clients, the aggregated result remains identical. This gives rise to several desirable properties, such as data-heterogeneity invariance, client-number invariance, absolute convergence, and being hyperparameter-free (AFL is the first hyperparameter-free method in FL history). We conduct experiments across various FL settings, including extremely non-IID ones and scenarios with a large number of clients (e.g., ≥ 1000). In all these settings, our AFL consistently performs competitively while existing FL techniques encounter various obstacles. Code is available at https://github.com/ZHUANGHP/Analytic-federated-learning
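The weight-invariant claim has a concrete linear-algebra reading. Below is a minimal sketch (assuming a ridge-regression-style analytic model; function names are placeholders, and this is not the authors' released code): each client shares only its sufficient statistics X_k^T X_k and X_k^T Y_k, and the server recovers the pooled closed-form solution in a single round, however the data are split.

```python
import numpy as np

def client_stats(X, Y):
    """Client-side: compute the local Gram matrix and cross-correlation."""
    return X.T @ X, X.T @ Y

def server_aggregate(stats, gamma=1e-3):
    """Server-side: sum the statistics, then solve once in closed form."""
    d = stats[0][0].shape[0]
    G = sum(g for g, _ in stats)   # sum_k X_k^T X_k
    C = sum(c for _, c in stats)   # sum_k X_k^T Y_k
    return np.linalg.solve(G + gamma * np.eye(d), C)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(1000, 16)), rng.normal(size=(1000, 4))

# Centralized fit vs. the same data split across three very unbalanced clients.
W_central = server_aggregate([client_stats(X, Y)])
shards = np.split(np.arange(1000), [50, 300])
W_fed = server_aggregate([client_stats(X[s], Y[s]) for s in shards])
assert np.allclose(W_central, W_fed)   # identical result, regardless of the split
```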
Related papers
- Exploiting Label Skews in Federated Learning with Model Concatenation [39.38427550571378]
Federated Learning (FL) has emerged as a promising solution to perform deep learning on different data owners without exchanging raw data.
Among the different non-IID types, label skew is common and challenging in image classification and other tasks.
We propose FedConcat, a simple and effective approach that concatenates these local models as the base of the global model.
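The summary gives only the high-level idea; here is a hedged sketch (encoder construction and names are placeholders, not the paper's code) of treating frozen local models as a concatenated feature base on which a global classifier is trained.

```python
import numpy as np

def concat_features(encoders, x):
    """Concatenate the feature outputs of the frozen per-client encoders."""
    return np.concatenate([enc(x) for enc in encoders], axis=-1)

# Two toy "local models": fixed random projections standing in for trained encoders.
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(32, 16)), rng.normal(size=(32, 16))
encoders = [lambda x, W=W1: np.tanh(x @ W), lambda x, W=W2: np.tanh(x @ W)]

x = rng.normal(size=(8, 32))
features = concat_features(encoders, x)
print(features.shape)   # (8, 32): the concatenated base for the global classifier
```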
arXiv Detail & Related papers (2023-12-11T10:44:52Z)
- ALI-DPFL: Differentially Private Federated Learning with Adaptive Local Iterations [26.310416723272184]
Federated Learning (FL) is a distributed machine learning technique that allows model training among multiple devices or organizations by sharing training parameters instead of raw data.
However, adversaries can still infer individual information through inference attacks on these training parameters. Differential Privacy (DP) has been widely used in FL to prevent such attacks.
We consider differentially private federated learning in a resource-constrained scenario, where both privacy budget and communication rounds are constrained.
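For context, a minimal sketch of the clip-and-noise Gaussian mechanism commonly used for DP in FL (the clipping norm and noise multiplier are placeholder assumptions; this is not ALI-DPFL's adaptive-iteration scheme itself):

```python
import numpy as np

def privatize_update(delta, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a client's model update, then add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
    clipped = delta * scale                                  # bound the sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=delta.shape)
    return clipped + noise   # only this noisy update ever leaves the device

update = np.random.default_rng(2).normal(size=128)
print(np.linalg.norm(privatize_update(update)))
```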
arXiv Detail & Related papers (2023-08-21T04:09:59Z)
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyse a novel aggregation framework that allows for formalizing and tackling computationally heterogeneous data.
The proposed aggregation algorithms are analyzed extensively from both theoretical and experimental perspectives.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Federated Adversarial Learning: A Framework with Convergence Analysis [28.136498729360504]
Federated learning (FL) is a trending training paradigm to utilize decentralized training data.
FL allows clients to update model parameters locally for several epochs, then share them with the server for aggregation into a global model.
This training paradigm with multi-local step updating before aggregation exposes unique vulnerabilities to adversarial attacks.
arXiv Detail & Related papers (2022-08-07T04:17:34Z)
- Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC [66.71660672526349]
We propose a sparsely updating variant of the Fully Connected (FC) layer, named Partial FC (PFC).
In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.
The computing requirement, the probability of inter-class conflict, and the frequency of passive updates on tail class centers are dramatically reduced.
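A hedged sketch of the sampling step described above (function and parameter names are mine, not the paper's): keep the class centers for labels present in the batch, plus a random subset of the remaining negative centers, and compute the loss over that fraction of classes only.

```python
import numpy as np

def sample_partial_centers(centers, batch_labels, sample_rate=0.1, rng=None):
    """Select positive centers plus a random subset of negative class centers."""
    rng = rng or np.random.default_rng()
    num_classes = centers.shape[0]
    positive = np.unique(batch_labels)
    num_sampled = max(int(sample_rate * num_classes), len(positive))
    negatives = np.setdiff1d(np.arange(num_classes), positive)
    sampled_neg = rng.choice(negatives, num_sampled - len(positive), replace=False)
    selected = np.concatenate([positive, sampled_neg])
    return centers[selected], selected   # feed these into the margin-based softmax

centers = np.random.default_rng(3).normal(size=(100_000, 512))  # 100k identities
subset, idx = sample_partial_centers(centers, batch_labels=np.array([3, 42, 7]))
print(subset.shape)   # (10000, 512): ~10% of the FC layer used per iteration
```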
arXiv Detail & Related papers (2022-03-28T14:33:21Z)
- A Novel Optimized Asynchronous Federated Learning Framework [1.7541806468876109]
This paper proposes a novel Asynchronous Federated Learning framework VAFL.
VAFL reduces the number of communications by about 51.02%, with an average communication compression rate of 48.23%.
arXiv Detail & Related papers (2021-11-18T02:52:49Z)
- Anarchic Federated Learning [9.440407984695904]
We propose a new paradigm in federated learning called "Anarchic Federated Learning" (AFL).
In AFL, each worker has complete freedom to choose i) when to participate in FL, and ii) the number of local steps to perform in each round based on its current situation.
We propose two Anarchic FedAvg-like algorithms with two-sided learning rates for both cross-device and cross-silo settings.
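A minimal sketch of that anarchic pattern on a toy quadratic objective (the loss, step budgets, and learning rates are placeholder assumptions, not the paper's algorithms): each worker independently chooses whether to show up and how many local steps to run, while the server applies its own learning rate to the averaged updates, hence "two-sided".

```python
import numpy as np

def local_update(w, target, local_lr, num_steps, rng):
    """Worker-side: num_steps of noisy SGD on the toy loss ||w - target||^2."""
    for _ in range(num_steps):
        grad = 2 * (w - target) + rng.normal(scale=0.01, size=w.shape)
        w = w - local_lr * grad
    return w

rng = np.random.default_rng(4)
w_global = np.zeros(8)
client_targets = rng.normal(size=(20, 8))
global_lr, local_lr = 0.5, 0.1

for _ in range(50):
    # An arbitrary subset of workers participates, each with its own step budget.
    participants = rng.choice(20, size=rng.integers(1, 20), replace=False)
    deltas = []
    for k in participants:
        steps = int(rng.integers(1, 10))   # the worker picks this for itself
        w_k = local_update(w_global.copy(), client_targets[k], local_lr, steps, rng)
        deltas.append(w_k - w_global)
    w_global = w_global + global_lr * np.mean(deltas, axis=0)  # server-side rate
```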
arXiv Detail & Related papers (2021-08-23T00:38:37Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
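The summary only names "multi-center aggregation"; a common reading (assumed here, not taken from the paper) is k-means-style clustering of client models, with one aggregated model maintained per cluster instead of a single global average.

```python
import numpy as np

def multi_center_aggregate(client_ws, centers, num_iters=5):
    """Alternate assigning clients to their nearest center and re-averaging."""
    for _ in range(num_iters):
        dists = np.linalg.norm(client_ws[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for c in range(len(centers)):
            members = client_ws[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)   # per-center FedAvg
    return centers, assign

rng = np.random.default_rng(5)
client_ws = np.concatenate([rng.normal(0, 1, (10, 4)), rng.normal(5, 1, (10, 4))])
centers, assign = multi_center_aggregate(client_ws, client_ws[[0, -1]].copy())
print(assign)   # two client clusters, each with its own aggregated model
```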
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
- Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models [67.66144604972052]
Federated learning (FL) is a promising way to use the computing power of mobile devices while maintaining privacy of users.
We show that a critical issue that affects the test accuracy is the large gradient diversity of the models from different users.
We propose a novel grouping-based model averaging method to replace the FedAvg averaging method.
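The entry names a grouping-based replacement for FedAvg without spelling out the rule, so the following is an illustrative two-level average under that assumption, not the paper's exact method:

```python
import numpy as np

def fedavg(client_ws):
    """Plain FedAvg: uniform average of the client weight vectors."""
    return np.mean(client_ws, axis=0)

def grouped_average(client_ws, num_groups=3, rng=None):
    """Average within groups first, then give every group mean equal weight."""
    rng = rng or np.random.default_rng()
    order = rng.permutation(len(client_ws))
    # The paper's actual grouping criterion isn't given in the summary above,
    # so random groups stand in for it here.
    groups = np.array_split(order, num_groups)
    return np.mean([client_ws[g].mean(axis=0) for g in groups], axis=0)

client_ws = np.random.default_rng(7).normal(size=(10, 64))  # 10 client models
print(np.linalg.norm(fedavg(client_ws) - grouped_average(client_ws)))
```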
arXiv Detail & Related papers (2020-08-26T03:36:07Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)