Robust Federated Learning Through Representation Matching and Adaptive
Hyper-parameters
- URL: http://arxiv.org/abs/1912.13075v1
- Date: Mon, 30 Dec 2019 20:19:20 GMT
- Title: Robust Federated Learning Through Representation Matching and Adaptive
Hyper-parameters
- Authors: Hesham Mostafa
- Abstract summary: Federated learning is a distributed, privacy-aware learning scenario which trains a single model on data belonging to several clients.
Current federated learning methods struggle in cases with heterogeneous client-side data distributions.
We propose a novel representation matching scheme that reduces the divergence of local models.
- Score: 5.319361976450981
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a distributed, privacy-aware learning scenario which
trains a single model on data belonging to several clients. Each client trains
a local model on its data and the local models are then aggregated by a central
party. Current federated learning methods struggle in cases with heterogeneous
client-side data distributions which can quickly lead to divergent local models
and a collapse in performance. Careful hyper-parameter tuning is particularly
important in these cases but traditional automated hyper-parameter tuning
methods would require several training trials which is often impractical in a
federated learning setting. We describe a two-pronged solution to the issues of
robustness and hyper-parameter tuning in federated learning settings. We
propose a novel representation matching scheme that reduces the divergence of
local models by ensuring the feature representations in the global (aggregate)
model can be derived from the locally learned representations. We also propose
an online hyper-parameter tuning scheme which uses an online version of the
REINFORCE algorithm to find a hyper-parameter distribution that maximizes the
expected improvements in training loss. We show on several benchmarks that our
two-part scheme of local representation matching and global adaptive
hyper-parameters significantly improves performance and training robustness.
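The local representation matching step can be illustrated with a minimal sketch. Assuming a PyTorch-style model that returns (features, logits), the snippet below adds an auxiliary penalty that asks a learned linear projection of the local features to reproduce the frozen aggregate model's features; the projection layer `proj`, the MSE form of the penalty, and the `match_weight` coefficient are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def local_step(local_model, global_model, proj, batch, labels,
               optimizer, match_weight=0.1):
    """One client-side update with a representation-matching penalty (sketch)."""
    global_model.eval()                       # the aggregate model stays frozen
    optimizer.zero_grad()

    local_feats, logits = local_model(batch)  # assumes (features, logits) output
    with torch.no_grad():
        global_feats, _ = global_model(batch)

    task_loss = F.cross_entropy(logits, labels)
    # Penalize local drift: the global features should remain derivable
    # from the locally learned features via a simple projection.
    match_loss = F.mse_loss(proj(local_feats), global_feats)

    loss = task_loss + match_weight * match_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

The online hyper-parameter tuning part can likewise be sketched as maintaining a sampling distribution over a hyper-parameter and nudging its parameters by reward times the gradient of the log-probability, which is the REINFORCE estimator. Below, a Gaussian over the log learning rate stands in for the hyper-parameter distribution and the reward is the per-round decrease in training loss; the Gaussian parameterization, step size, and class name are assumptions made for illustration.

```python
import numpy as np

class OnlineHyperparamTuner:
    """REINFORCE-style online tuning of a single hyper-parameter (sketch)."""

    def __init__(self, mu=-3.0, log_sigma=0.0, step=0.05):
        self.mu, self.log_sigma, self.step = mu, log_sigma, step

    def sample(self):
        # Draw a learning rate from a log-normal distribution.
        self.eps = np.random.randn()
        log_lr = self.mu + np.exp(self.log_sigma) * self.eps
        return float(np.exp(log_lr))

    def update(self, reward):
        # REINFORCE update: move (mu, log_sigma) along reward * grad log p(h).
        sigma = np.exp(self.log_sigma)
        self.mu += self.step * reward * self.eps / sigma
        self.log_sigma += self.step * reward * (self.eps ** 2 - 1.0)
```

In a federated round, the server could draw a learning rate with `tuner.sample()`, broadcast it to the participating clients, and call `tuner.update(prev_loss - new_loss)` after aggregation; this usage is one plausible reading of the online scheme summarized above, not a description of the authors' implementation.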
Related papers
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
With increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees [18.24213566328972]
Decentralized federated learning (DFL) captures FL settings where both (i) model updates and (ii) model aggregations are carried out by the clients without a central server.
DSpodFL consistently achieves improved training speeds compared with baselines under various system settings.
arXiv Detail & Related papers (2024-02-05T19:02:19Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Cross-Silo Federated Learning Across Divergent Domains with Iterative Parameter Alignment [4.95475852994362]
Federated learning is a method for training a machine learning model across remote clients.
We reformulate the typical federated learning setup to learn N models optimized for a common objective.
We find that the technique achieves competitive results on a variety of data partitions compared to state-of-the-art approaches.
arXiv Detail & Related papers (2023-11-08T16:42:14Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Adapter-based Selective Knowledge Distillation for Federated
Multi-domain Meeting Summarization [36.916155654985936]
Meeting summarization has emerged as a promising technique for providing users with condensed summaries.
We propose adapter-based Federated Selective Knowledge Distillation (AdaFedSelecKD) for training performant client models.
arXiv Detail & Related papers (2023-08-07T03:34:01Z) - Federated Learning of Models Pre-Trained on Different Features with
Consensus Graphs [19.130197923214123]
Learning an effective global model on private and decentralized datasets has become an increasingly important challenge of machine learning.
We propose a feature fusion approach that extracts local representations from local models and incorporates them into a global representation that improves the prediction performance.
This paper presents solutions to these problems and demonstrates them in real-world applications on time series data such as power grids and traffic networks.
arXiv Detail & Related papers (2023-06-02T02:24:27Z) - Towards More Suitable Personalization in Federated Learning via
Decentralized Partial Model Training [67.67045085186797]
Almost all existing systems face large communication burdens and the risk of disruption if the central FL server fails.
It personalizes the "right" components in the deep models by alternately updating the shared and personal parameters.
To further promote the shared-parameter aggregation process, we propose DFed, which integrates local sharpness minimization.
arXiv Detail & Related papers (2023-05-24T13:52:18Z) - ADDS: Adaptive Differentiable Sampling for Robust Multi-Party Learning [24.288233074516455]
We propose a novel adaptive differentiable sampling framework (ADDS) for robust and communication-efficient multi-party learning.
The proposed framework significantly reduces local computation and communication costs while speeding up the central model convergence.
arXiv Detail & Related papers (2021-10-29T03:35:15Z) - A Bayesian Federated Learning Framework with Online Laplace
Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)