FedDUAL: A Dual-Strategy with Adaptive Loss and Dynamic Aggregation for Mitigating Data Heterogeneity in Federated Learning
- URL: http://arxiv.org/abs/2412.04416v1
- Date: Thu, 05 Dec 2024 18:42:29 GMT
- Title: FedDUAL: A Dual-Strategy with Adaptive Loss and Dynamic Aggregation for Mitigating Data Heterogeneity in Federated Learning
- Authors: Pranab Sahoo, Ashutosh Tripathi, Sriparna Saha, Samrat Mondal
- Abstract summary: Federated Learning (FL) combines locally optimized models from various clients into a unified global model. FL encounters significant challenges such as performance degradation, slower convergence, and reduced robustness of the global model. We introduce an innovative dual-strategy approach designed to effectively resolve these issues.
- Score: 12.307490659840845
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) marks a transformative approach to distributed model training by combining locally optimized models from various clients into a unified global model. While FL preserves data privacy by eliminating centralized storage, it encounters significant challenges such as performance degradation, slower convergence, and reduced robustness of the global model due to the heterogeneity in client data distributions. Among the various forms of data heterogeneity, label skew emerges as a particularly formidable and prevalent issue, especially in domains such as image classification. To address these challenges, we begin with comprehensive experiments to pinpoint the underlying issues in the FL training process. Based on our findings, we then introduce an innovative dual-strategy approach designed to resolve these issues effectively. First, we introduce an adaptive loss function for client-side training, crafted to preserve previously acquired knowledge while maintaining an optimal equilibrium between local optimization and global model coherence. Second, we develop a dynamic strategy for aggregating client models at the server that adapts to each client's unique learning patterns, effectively addressing the challenge of diverse data across the network. Our comprehensive evaluation, conducted across three diverse real-world datasets and coupled with theoretical convergence guarantees, demonstrates the superior efficacy of our method compared to several established state-of-the-art approaches.
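The abstract names the two components without giving their formulas. A minimal PyTorch sketch of the pattern is given below; the KL-based retention term, the mixing coefficient `alpha`, and the softmax score weighting are all illustrative assumptions, not the authors' exact method:

```python
import torch
import torch.nn.functional as F

# Client side: adaptive loss. One plausible realization of "preserve acquired
# knowledge while balancing local fit against global coherence" is
# cross-entropy plus a temperature-scaled KL term toward the frozen global
# model, mixed by a coefficient `alpha` (both choices are assumptions).
def adaptive_client_loss(local_logits, global_logits, targets, alpha, T=2.0):
    ce = F.cross_entropy(local_logits, targets)          # local optimization
    kl = F.kl_div(
        F.log_softmax(local_logits / T, dim=1),
        F.softmax(global_logits.detach() / T, dim=1),    # knowledge retention
        reduction="batchmean",
    ) * (T * T)
    return (1.0 - alpha) * ce + alpha * kl

# Server side: dynamic aggregation. Instead of fixed FedAvg weights, each
# client's parameters are weighted by a per-client score meant to reflect its
# learning pattern (the scoring rule here is a placeholder).
def dynamic_aggregate(client_states, scores):
    weights = torch.softmax(torch.tensor(scores, dtype=torch.float32), dim=0)
    agg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_states[0].items()}
    for state, w in zip(client_states, weights):
        for k, v in state.items():
            agg[k] += w * v.float()
    return agg
```

In practice `alpha` and the per-client `scores` would be set adaptively each round, which is presumably where the "adaptive" and "dynamic" qualifiers enter; the sketch leaves both as inputs.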
Related papers
- Robust Asymmetric Heterogeneous Federated Learning with Corrupted Clients [60.22876915395139]
This paper studies a challenging robust federated learning task with model-heterogeneous and data-corrupted clients.
Data corruption is unavoidable due to factors such as random noise, compression artifacts, or environmental conditions in real-world deployment.
We propose a novel Robust Asymmetric Heterogeneous Federated Learning framework to address these issues.
arXiv Detail & Related papers (2025-03-12T09:52:04Z)
- Asynchronous Personalized Federated Learning through Global Memorization [16.630360485032163]
Federated Learning offers a privacy-preserving solution by enabling collaborative model training across decentralized devices without centralizing sensitive data.
We propose the Asynchronous Personalized Federated Learning framework, which empowers clients to develop personalized models using a server-side semantic generator.
This generator, trained via data-free knowledge transfer under global model supervision, enhances client data diversity by producing both seen and unseen samples.
To counter the risks of synthetic data impairing training, we introduce a decoupled model method, ensuring robust personalization.
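The generator loop itself can be sketched generically (a PyTorch sketch in the spirit of data-free knowledge transfer; the conditional interface `generator(z, y)` and the confidence objective are assumptions, not this paper's exact procedure):

```python
import torch
import torch.nn.functional as F

# One generator update: no client data is used. The frozen global model
# supervises the generator by demanding confident predictions for sampled
# class labels, so the generator learns to emit class-consistent samples.
def generator_step(generator, global_model, opt, batch=64, z_dim=100, n_cls=10):
    z = torch.randn(batch, z_dim)
    y = torch.randint(0, n_cls, (batch,))
    x = generator(z, y)                  # hypothetical conditional generator
    logits = global_model(x)             # frozen teacher; `opt` holds only
    loss = F.cross_entropy(logits, y)    # generator params, so the teacher
    opt.zero_grad()                      # is never updated
    loss.backward()
    opt.step()
    return loss.item()
```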
arXiv Detail & Related papers (2025-03-01T09:00:33Z)
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Federated-Centric Adaptive Optimization, which is a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Adversarial Federated Consensus Learning for Surface Defect Classification Under Data Heterogeneity in IIoT [8.48069043458347]
It is difficult to collect and centralize sufficient training data from the various entities in the Industrial Internet of Things (IIoT).
Federated learning (FL) provides a solution by enabling collaborative global model training across clients.
We propose a novel personalized FL approach named Adversarial Federated Consensus Learning (AFedCL).
arXiv Detail & Related papers (2024-09-24T03:59:32Z)
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
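The summary leaves the objective abstract; a standard MAP-style personalized objective (an assumption consistent in spirit with, though not necessarily identical to, FedMAP's formulation) treats the global parameters \phi as a prior over each client's parameters \theta_k:

```latex
\theta_k^{*} \;=\; \arg\max_{\theta_k}\;
\log p(\mathcal{D}_k \mid \theta_k) + \log p(\theta_k \mid \phi),
\qquad
\log p(\theta_k \mid \phi) \;\propto\; -\tfrac{\lambda}{2}\,\lVert \theta_k - \phi \rVert_2^2 .
```

Under a Gaussian prior this reduces to local training with a proximal pull toward the global model; the bi-level aspect arises when the server updates \phi given the clients' MAP solutions.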
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors [21.931436901703634]
Conventional Federated Learning (FL) involves collaborative training of a global model while maintaining user data privacy.
One of its branches, decentralized FL, is a serverless network that allows clients to own and optimize different local models separately.
We propose a novel decentralized FL technique, dubbed DeSA, by introducing Synthetic Anchors.
arXiv Detail & Related papers (2024-05-19T11:36:45Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Adaptive Self-Distillation for Minimizing Client Drift in Heterogeneous Federated Learning [9.975023463908496]
Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data.
We propose a novel regularization technique based on adaptive self-distillation (ASD) for training models on the client side.
Our regularization scheme adaptively adjusts to the client's training data based on the global model entropy and the client's label distribution.
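A per-example sketch of that adaptivity is given below (a minimal PyTorch sketch; the specific rule combining the global model's predictive entropy with the client's class frequencies is an illustrative assumption, not the paper's exact weighting):

```python
import torch
import torch.nn.functional as F

# Adaptive self-distillation, sketched per example: the local model distills
# from the global model, with the pull modulated by (i) the global model's
# predictive entropy and (ii) how frequent the example's class is on this
# client. Both modulations are assumptions about the mechanism.
def asd_loss(local_logits, global_logits, targets, label_freq, T=2.0):
    ce = F.cross_entropy(local_logits, targets)
    p_g = F.softmax(global_logits.detach() / T, dim=1)
    entropy = -(p_g * p_g.clamp_min(1e-8).log()).sum(dim=1)
    weight = label_freq[targets] / (1.0 + entropy)  # trust confident teachers
    kl = F.kl_div(F.log_softmax(local_logits / T, dim=1), p_g,
                  reduction="none").sum(dim=1)
    return ce + (weight * kl).mean() * (T * T)
```

Here `label_freq` is a 1-D tensor of per-class frequencies on the client, so rare local classes lean less on distillation.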
arXiv Detail & Related papers (2023-05-31T07:00:42Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
Specifically, we construct synthetic sets of data on each client that locally match the loss landscape of the original data.
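One generic way to realize such loss-landscape matching is gradient matching on the shared model; the sketch below (the function name and the squared-gradient-distance objective are assumptions, not FedDM's exact algorithm) learns a small synthetic batch `x_syn` whose gradients mimic those of the real data:

```python
import torch

# One matching step: `x_syn` is a small learnable tensor of synthetic inputs
# (requires_grad=True, registered in `opt_syn`); only it leaves the client.
def match_step(model, loss_fn, x_syn, y_syn, x_real, y_real, opt_syn):
    g_real = torch.autograd.grad(loss_fn(model(x_real), y_real),
                                 model.parameters())
    g_syn = torch.autograd.grad(loss_fn(model(x_syn), y_syn),
                                model.parameters(), create_graph=True)
    # squared distance between synthetic and real gradients, per parameter
    match = sum(((a - b.detach()) ** 2).sum() for a, b in zip(g_syn, g_real))
    opt_syn.zero_grad()
    match.backward()          # backprop through g_syn into x_syn
    opt_syn.step()
    return match.item()
```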
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles, namely stragglers and the need for personalization.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
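Structurally, the split described above can be sketched as follows (a minimal PyTorch sketch; module and dimension names are placeholders, not the paper's code):

```python
import torch.nn as nn

# Global common representation + user-specific parameters: the backbone is
# learned collaboratively and aggregated by the server; each client keeps and
# trains its own head, which never leaves the device.
class PersonalizedModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                    # shared, aggregated
        self.head = nn.Linear(feat_dim, n_classes)  # personal, local only

    def forward(self, x):
        return self.head(self.backbone(x))
```

Per round, a client would typically refit its local head with the backbone frozen and then update the backbone on its data; the server averages only the backbone weights, so personalization survives aggregation.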
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework that solves two challenges, distribution shift across organizations and inter-client noise, simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random-walk model on the true minimizer of the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors: the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
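Schematically, and keeping only the structure stated above (the exact constants and their learning-rate dependence are derived in the paper), with learning rate \mu and random-walk drift magnitude q the steady-state deviation decomposes as

```latex
\mathrm{MSD} \;\approx\;
\underbrace{c_1(\mu)\,\sigma^2_{\text{data}}}_{\text{per-agent data variability}}
+ \underbrace{c_2(\mu)\,\sigma^2_{\text{model}}}_{\text{cross-agent model variability}}
+ \underbrace{c_3\,\frac{q^2}{\mu}}_{\text{tracking}} .
```

Only the tracking term's inverse proportionality to the learning rate is asserted in the summary; the other two dependencies are left abstract here.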
arXiv Detail & Related papers (2020-02-20T15:00:54Z)