Multi-Task Model Personalization for Federated Supervised SVM in Heterogeneous Networks
- URL: http://arxiv.org/abs/2303.10254v2
- Date: Sat, 1 Apr 2023 16:18:28 GMT
- Title: Multi-Task Model Personalization for Federated Supervised SVM in Heterogeneous Networks
- Authors: Aleksei Ponomarenko-Timofeev, Olga Galinina, Ravikumar Balakrishnan, Nageen Himayat, Sergey Andreev, and Yevgeni Koucheryavy
- Abstract summary: Federated systems enable collaborative training on highly heterogeneous data through model personalization.
To accelerate the learning procedure for diverse participants in a multi-task federated setting, more efficient and robust methods need to be developed.
In this paper, we design an efficient iterative distributed method based on the alternating direction method of multipliers (ADMM) for support vector machines (SVMs).
The proposed method utilizes efficient computations and model exchange in a network of heterogeneous nodes and allows personalization of the learning model in the presence of non-i.i.d. data.
- Score: 10.169907307499916
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated systems enable collaborative training on highly heterogeneous data
through model personalization, which can be facilitated by employing multi-task
learning algorithms. However, significant variation in device computing
capabilities may result in substantial degradation in the convergence rate of
training. To accelerate the learning procedure for diverse participants in a
multi-task federated setting, more efficient and robust methods need to be
developed. In this paper, we design an efficient iterative distributed method
based on the alternating direction method of multipliers (ADMM) for support
vector machines (SVMs), which tackles federated classification and regression.
The proposed method utilizes efficient computations and model exchange in a
network of heterogeneous nodes and allows personalization of the learning model
in the presence of non-i.i.d. data. To further enhance privacy, we introduce a
random mask procedure that helps avoid data inversion. Finally, we analyze the
impact of the proposed privacy mechanisms and participant hardware and data
heterogeneity on the system performance.
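Illustrative sketch (not from the paper): the abstract does not spell out the ADMM updates, the multi-task coupling, or the mask construction, so the following minimal consensus-ADMM sketch for a linear SVM is only an orientation aid under stated assumptions. The subgradient inner solver, the Gaussian perturbation used as a stand-in for the random-mask step, and all function and parameter names below are hypothetical.

```python
import numpy as np

def local_update(w, X, y, z, u, rho, lr=0.01, steps=50):
    """Approximate the node's ADMM subproblem
         min_w  sum_i max(0, 1 - y_i * (x_i @ w)) + (rho/2) * ||w - z + u||^2
       by a few subgradient steps (the hinge term has no closed form)."""
    for _ in range(steps):
        margins = y * (X @ w)
        active = margins < 1                          # margin-violating samples
        grad = -(y[active, None] * X[active]).sum(axis=0) + rho * (w - z + u)
        w = w - lr * grad
    return w

def federated_svm_admm(datasets, dim, rho=1.0, lam=0.1, rounds=30,
                       mask_scale=0.0, seed=0):
    """Consensus-ADMM for a linear SVM over K nodes.
       datasets: list of (X_k, y_k) pairs with labels in {-1, +1}.
       mask_scale > 0 adds Gaussian noise to the models each node shares,
       an assumed stand-in for the paper's random-mask procedure."""
    rng = np.random.default_rng(seed)
    K = len(datasets)
    w = [np.zeros(dim) for _ in range(K)]   # per-node local models
    u = [np.zeros(dim) for _ in range(K)]   # scaled dual variables
    z = np.zeros(dim)                       # shared consensus model
    for _ in range(rounds):
        # 1) each node solves its local subproblem (parallel in practice)
        for k, (X, y) in enumerate(datasets):
            w[k] = local_update(w[k], X, y, z, u[k], rho)
        # 2) nodes exchange (optionally masked) models; the average minimizes
        #    (lam/2)||z||^2 + sum_k (rho/2)||w_k - z + u_k||^2
        shared = [wk + mask_scale * rng.standard_normal(dim) for wk in w]
        z = rho * sum(s + uk for s, uk in zip(shared, u)) / (lam + K * rho)
        # 3) dual ascent keeps local models tied to the consensus
        for k in range(K):
            u[k] = u[k] + w[k] - z
    return z, w
```

With mask_scale=0 this reduces to plain consensus ADMM; the paper's multi-task personalization presumably replaces the single consensus model z with per-task couplings, which the abstract does not specify.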
Related papers
- FedECADO: A Dynamical System Model of Federated Learning [15.425099636035108]
Federated learning harnesses the power of distributed optimization to train a unified machine learning model across separate clients.
This work proposes FedECADO, a new algorithm inspired by a dynamical system representation of the federated learning process.
Compared to prominent techniques, including FedProx and FedNova, FedECADO achieves higher classification accuracies in numerous heterogeneous scenarios.
arXiv Detail & Related papers (2024-10-13T17:26:43Z)
- FedShift: Tackling Dual Heterogeneity Problem of Federated Learning via Weight Shift Aggregation [6.3842184099869295]
Federated Learning (FL) offers a compelling method for training machine learning models with a focus on preserving data privacy.
The presence of system heterogeneity and statistical heterogeneity, recognized challenges in FL, arises from the diversity of client hardware, network, and dataset distribution.
This paper introduces FedShift, a novel algorithm designed to enhance both the training speed and the models' accuracy in a dual heterogeneous scenario.
arXiv Detail & Related papers (2024-02-02T00:03:51Z)
- Efficient Cluster Selection for Personalized Federated Learning: A Multi-Armed Bandit Approach [2.5477011559292175]
Federated learning (FL) offers a decentralized training approach for machine learning models, prioritizing data privacy.
In this paper, we introduce a dynamic Upper Confidence Bound (dUCB) algorithm inspired by the multi-armed bandit (MAB) approach; a generic UCB-style selection loop is sketched after this list.
arXiv Detail & Related papers (2023-10-29T16:46:50Z)
- FedSym: Unleashing the Power of Entropy for Benchmarking the Algorithms for Federated Learning [1.4656078321003647]
Federated learning (FL) is a decentralized machine learning approach where independent learners process data privately.
We study the currently popular data partitioning techniques and visualize their main disadvantages.
We propose a method that leverages entropy and symmetry to construct 'the most challenging' and controllable data distributions.
arXiv Detail & Related papers (2023-10-11T18:39:08Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Adaptive Serverless Learning [114.36410688552579]
We propose a novel adaptive decentralized training approach, which can compute the learning rate from data dynamically.
Our theoretical results reveal that the proposed algorithm can achieve linear speedup with respect to the number of workers.
To reduce the communication overhead, we further propose a communication-efficient adaptive decentralized training approach.
arXiv Detail & Related papers (2020-08-24T13:23:02Z)
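The dUCB entry above references a multi-armed-bandit mechanism for cluster selection. The paper's dynamic variant is not described in this list, so the following is only a standard UCB1-style selection loop under assumed inputs; reward_fn and all other names are hypothetical stand-ins (e.g., validation accuracy after briefly adopting a cluster's model).

```python
import math
import random

def ucb_select_cluster(n_clusters, reward_fn, rounds=200, c=2.0):
    """UCB1 loop: treat each candidate cluster as a bandit arm and balance
       exploring under-tried clusters against exploiting the best one.
       reward_fn(k) is an assumed oracle returning a noisy score for
       joining cluster k (hypothetical, for illustration only)."""
    counts = [0] * n_clusters      # times each cluster has been tried
    means = [0.0] * n_clusters     # running mean reward per cluster
    for t in range(1, rounds + 1):
        if t <= n_clusters:
            k = t - 1              # try every cluster once to initialize
        else:
            k = max(range(n_clusters),
                    key=lambda a: means[a] + math.sqrt(c * math.log(t) / counts[a]))
        r = reward_fn(k)
        counts[k] += 1
        means[k] += (r - means[k]) / counts[k]   # incremental mean update
    return max(range(n_clusters), key=lambda a: means[a])

# Toy usage with a synthetic reward oracle (illustration only):
best = ucb_select_cluster(5, lambda k: random.gauss(0.5 + 0.05 * k, 0.1))
```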
This list is automatically generated from the titles and abstracts of the papers in this site.