Global Update Guided Federated Learning
- URL: http://arxiv.org/abs/2204.03920v1
- Date: Fri, 8 Apr 2022 08:36:26 GMT
- Title: Global Update Guided Federated Learning
- Authors: Qilong Wu, Lin Liu, Shibei Xue
- Abstract summary: Federated learning protects data privacy and security by exchanging models instead of data.
We propose global-update-guided federated learning (FedGG), which introduces a model-cosine loss into local objective functions.
Numerical simulations show that FedGG significantly improves model convergence accuracy and speed.
- Score: 11.731231528534035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning protects data privacy and security by exchanging models
instead of data. However, unbalanced data distributions among participating
clients compromise the accuracy and convergence speed of federated learning
algorithms. To alleviate this problem, unlike previous studies that limit the
distance of updates for local models, we propose global-update-guided federated
learning (FedGG), which introduces a model-cosine loss into local objective
functions, so that local models can fit local data distributions under the
guidance of update directions of global models. Furthermore, considering that
the update direction of a global model is informative in the early stage of
training, we propose adaptive loss weights based on the update distances of
local models. Numerical simulations show that, compared with other advanced
algorithms, FedGG achieves significant improvements in model convergence accuracy
and speed. Additionally, compared with traditional fixed loss weights,
adaptive loss weights enable our algorithm to be more stable and easier to
implement in practice.
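The abstract describes two ingredients: a model-cosine loss that aligns a client's local update direction with the global update direction, and an adaptive loss weight based on local update distances. The snippet below is a minimal sketch of that idea, assuming PyTorch and an illustrative choice of reference points and weighting rule; it is not the paper's exact formulation.

```python
# Hedged sketch of a model-cosine regularizer in the spirit of FedGG.
# The loss form, reference points, and adaptive-weight rule are assumptions.
import torch
import torch.nn.functional as F

def flatten(params):
    # Concatenate a list of parameter tensors into a single vector.
    return torch.cat([p.reshape(-1) for p in params])

def model_cosine_loss(local_params, global_round_start, global_prev_round):
    # Local update direction: how far local training has moved from this
    # round's global model; global update direction: how the global model
    # moved in the previous round.
    local_dir = flatten(local_params) - flatten(global_round_start)
    global_dir = flatten(global_round_start) - flatten(global_prev_round)
    # 1 - cosine similarity is zero when the two directions are aligned.
    return 1.0 - F.cosine_similarity(local_dir, global_dir, dim=0)

def adaptive_weight(local_params, global_round_start, base_mu=1.0):
    # Assumed heuristic echoing the paper's adaptive weights: shrink the
    # regularization strength as the local update distance grows.
    dist = torch.norm(flatten(local_params) - flatten(global_round_start))
    return base_mu / (1.0 + dist.item())

def local_objective(task_loss, model, global_round_start, global_prev_round):
    # Total local loss = task loss + adaptive weight * model-cosine loss.
    params = list(model.parameters())
    mu = adaptive_weight(params, global_round_start)
    return task_loss + mu * model_cosine_loss(params, global_round_start, global_prev_round)
```

In a local training loop, local_objective would replace the plain task loss, with detached copies of the current and previous global models kept fixed throughout the round.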
Related papers
- Mitigating System Bias in Resource Constrained Asynchronous Federated
Learning Systems [2.8790600498444032]
We propose a dynamic global model aggregation method within Asynchronous Federated Learning (AFL) deployments.
Our method scores and adjusts the weighting of client model updates based on their upload frequency to accommodate differences in device capabilities.
arXiv Detail & Related papers (2024-01-24T10:51:15Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - FedAgg: Adaptive Federated Learning with Aggregated Gradients [1.5653612447564105]
We propose an adaptive federated learning algorithm, FedAgg, to alleviate the divergence between the local and average model parameters and obtain a fast model convergence rate.
We show that our framework is superior to existing state-of-the-art FL strategies for enhancing model performance and accelerating the convergence rate on IID and non-IID datasets.
arXiv Detail & Related papers (2023-03-28T08:07:28Z) - Recursive Euclidean Distance Based Robust Aggregation Technique For
Federated Learning [4.848016645393023]
Federated learning is a solution to data availability and privacy challenges in machine learning.
Malicious users aim to sabotage the collaborative learning process by training the local model with malicious data.
We propose a novel robust aggregation approach based on Euclidean distance calculation.
arXiv Detail & Related papers (2023-03-20T06:48:43Z) - Integrating Local Real Data with Global Gradient Prototypes for
Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on decentralized, long-tailed data yields a poorly behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z) - Revisiting Communication-Efficient Federated Learning with Balanced
Global and Local Updates [14.851898446967672]
We investigate and analyze the optimal trade-off between the number of local training rounds and the number of global aggregations.
Our proposed scheme achieves better prediction accuracy and converges much faster than the baseline schemes.
arXiv Detail & Related papers (2022-05-03T13:05:26Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted.
arXiv Detail & Related papers (2020-06-18T16:55:20Z)
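The last entry above (LFL) quantizes both the global model and the local model updates before transmission. Below is a minimal sketch of uniform quantization of an update vector, assuming a simple fixed bit-width, min-max scheme chosen for illustration rather than the paper's actual quantizer.

```python
# Hedged sketch of quantizing a model update before transmission.
# The bit-width and min-max scaling scheme are illustrative assumptions.
import torch

def uniform_quantize(update, num_bits=8):
    # Map the update into integer codes in [0, 2**num_bits - 1],
    # returning the scale and offset needed to dequantize on the server.
    lo, hi = update.min(), update.max()
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else torch.tensor(1.0)
    codes = torch.round((update - lo) / scale).clamp(0, levels).to(torch.int32)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    # Reconstruct an approximate update from its quantized form.
    return codes.to(torch.float32) * scale + lo

# Example: a client quantizes its local update, the server dequantizes it.
update = 0.01 * torch.randn(1000)
codes, scale, lo = uniform_quantize(update, num_bits=8)
recovered = dequantize(codes, scale, lo)
print("max quantization error:", (update - recovered).abs().max().item())
```

In an LFL-style pipeline, the server would quantize the broadcast global model in the same way; stochastic rounding or error feedback are common refinements of this basic scheme.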
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.