Improving Federated Aggregation with Deep Unfolding Networks
- URL: http://arxiv.org/abs/2306.17362v1
- Date: Fri, 30 Jun 2023 01:51:22 GMT
- Title: Improving Federated Aggregation with Deep Unfolding Networks
- Authors: Shanika I Nanayakkara, Shiva Raj Pokhrel, Gang Li
- Abstract summary: The performance of federated learning (FL) is negatively affected by device differences and varying statistical characteristics among participating clients.
We introduce a deep unfolding network (DUN)-based technique that learns adaptive weights to mitigate, without bias, the adverse impacts of heterogeneity.
The proposed method demonstrates accurate, quality-aware aggregation.
- Score: 19.836640510604422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of federated learning (FL) is negatively affected
by device differences and varying statistical characteristics among
participating clients. To address this issue, we introduce a deep unfolding
network (DUN)-based technique that learns adaptive weights that mitigate,
without bias, the adverse impacts of heterogeneity. The proposed method
demonstrates accurate, quality-aware aggregation. Furthermore, we evaluate the
best-performing weight-normalization approach to reduce the computational
demands of the aggregation method. The numerical experiments in this study
demonstrate the effectiveness of this approach and provide insights into the
interpretability of the learned unbiased weights. By incorporating unbiased
weights into the model, the proposed approach effectively addresses
quality-aware aggregation under the heterogeneity of the participating clients
and the FL environment. Code and details are available at
https://github.com/shanikairoshi/Improved_DUN_basedFL_Aggregation
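To make the aggregation idea concrete, here is a minimal sketch of deep-unfolding-based aggregation: a fixed number of FedAvg-style rounds is unrolled into a differentiable operator whose per-client weights are trainable parameters, tuned by backpropagation through the unfolded rounds. The class name, the toy data, and the stand-in validation loss are illustrative assumptions, not the authors' released code (see the repository above for that).
```python
# Minimal sketch of deep-unfolding-based FL aggregation (illustrative only).
import torch
import torch.nn as nn

class UnfoldedAggregator(nn.Module):
    """One learnable aggregation weight per client per unfolded round."""
    def __init__(self, num_clients: int, num_rounds: int):
        super().__init__()
        # Logits -> softmax keeps the weights positive and summing to one.
        self.logits = nn.Parameter(torch.zeros(num_rounds, num_clients))

    def forward(self, global_params, client_updates_per_round):
        # client_updates_per_round: list (length num_rounds) of tensors of
        # shape (num_clients, dim) holding each client's local update.
        params = global_params
        for t, updates in enumerate(client_updates_per_round):
            w = torch.softmax(self.logits[t], dim=0)        # (num_clients,)
            params = params + (w.unsqueeze(1) * updates).sum(dim=0)
        return params

# Toy usage: tune the weights so the unfolded aggregate fits a validation signal.
dim, num_clients, num_rounds = 10, 5, 3
agg = UnfoldedAggregator(num_clients, num_rounds)
opt = torch.optim.Adam(agg.parameters(), lr=1e-2)
global_params = torch.zeros(dim)
updates = [torch.randn(num_clients, dim) for _ in range(num_rounds)]
target = torch.randn(dim)                    # stand-in for validation feedback
for _ in range(100):
    opt.zero_grad()
    loss = ((agg(global_params, updates) - target) ** 2).mean()
    loss.backward()
    opt.step()
```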
Related papers
- FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors [50.131271229165165]
Federated Learning (FL) has emerged as a promising framework for distributed machine learning.
Data heterogeneity resulting from differences across user behaviors, preferences, and device characteristics poses a significant challenge for federated learning.
We propose Adaptive Weight Aggregation (FedAWA), a novel method that adaptively adjusts aggregation weights based on client vectors during the learning process.
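A minimal sketch of one plausible reading of this idea, assuming the "client vectors" are the local update vectors: clients whose updates align with the consensus direction receive larger aggregation weights. The function name and the temperature parameter are illustrative, not FedAWA's actual algorithm.
```python
# Hedged sketch: aggregation weights from client update vectors.
import torch

def adaptive_weights(client_updates: torch.Tensor, temp: float = 1.0):
    """client_updates: (num_clients, dim) local update vectors."""
    mean_update = client_updates.mean(dim=0, keepdim=True)
    sims = torch.nn.functional.cosine_similarity(client_updates, mean_update, dim=1)
    return torch.softmax(sims / temp, dim=0)    # positive, sums to one

updates = torch.randn(5, 100)
w = adaptive_weights(updates)
aggregate = (w.unsqueeze(1) * updates).sum(dim=0)
```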
arXiv Detail & Related papers (2025-03-20T04:49:40Z)
- Interaction-Aware Gaussian Weighting for Clustered Federated Learning [58.92159838586751]
Federated Learning (FL) has emerged as a decentralized paradigm for training models while preserving privacy.
We propose a novel clustered FL method, FedGWC (Federated Gaussian Weighting Clustering), which groups clients based on their data distribution.
Our experiments on benchmark datasets show that FedGWC outperforms existing FL algorithms in cluster quality and classification accuracy.
arXiv Detail & Related papers (2025-02-05T16:33:36Z)
- Federated Testing (FedTest): A New Scheme to Enhance Convergence and Mitigate Adversarial Attacks in Federated Learning [35.14491996649841]
We introduce a novel federated learning framework, which we call federated testing for federated learning (FedTest).
In FedTest, the local data of a specific user is used to train the model of that user and test the models of the other users.
Our numerical results reveal that the proposed method not only accelerates convergence rates but also diminishes the potential influence of malicious users.
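A hedged sketch of the cross-testing mechanism described above: every client's model is evaluated on every other client's local data, and the resulting peer scores can drive aggregation weights or flag malicious users. The helper names and the toy `evaluate` callback are assumptions for illustration.
```python
# Illustrative cross-testing: model i is scored on every other client's data.
import numpy as np

def cross_test_scores(models, client_datasets, evaluate):
    """evaluate(model, dataset) -> accuracy in [0, 1]."""
    n = len(models)
    scores = np.zeros((n, n))
    for i, model in enumerate(models):
        for j, data in enumerate(client_datasets):
            if i != j:                      # never test a model on its own data
                scores[i, j] = evaluate(model, data)
    peer_acc = scores.sum(axis=1) / (n - 1)  # low scorers may be malicious
    weights = peer_acc / peer_acc.sum()      # candidate aggregation weights
    return peer_acc, weights

# Toy usage with dummy models/datasets and a random evaluator.
rng = np.random.default_rng(0)
peer_acc, w = cross_test_scores(["m0", "m1", "m2"], ["d0", "d1", "d2"],
                                lambda m, d: rng.uniform(0.5, 1.0))
```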
arXiv Detail & Related papers (2025-01-19T21:01:13Z)
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z)
- Addressing Data Heterogeneity in Federated Learning with Adaptive Normalization-Free Feature Recalibration [1.33512912917221]
Federated learning is a decentralized collaborative training paradigm that preserves stakeholders' data ownership while improving performance and generalization.
We propose Adaptive Normalization-free Feature Recalibration (ANFR), an architecture-level approach that combines weight standardization and channel attention.
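The two named ingredients are standard building blocks; below is a generic PyTorch sketch of weight standardization and squeeze-and-excitation-style channel attention, not the authors' ANFR implementation.
```python
# Generic sketches of weight standardization and channel attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d whose weights are standardized per output channel."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style recalibration of channel features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        s = x.mean(dim=(2, 3))               # global average pool -> (N, C)
        return x * self.fc(s)[:, :, None, None]

x = torch.randn(2, 8, 16, 16)
y = ChannelAttention(8)(WSConv2d(8, 8, 3, padding=1)(x))
```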
arXiv Detail & Related papers (2024-10-02T20:16:56Z)
- Over-the-Air Federated Learning via Weighted Aggregation [9.043019524847491]
This paper introduces a new federated learning scheme that leverages over-the-air computation.
A novel feature of this scheme is the use of adaptive weights during aggregation.
We provide a mathematical methodology to derive the convergence bound for the proposed scheme.
arXiv Detail & Related papers (2024-09-12T08:07:11Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Aggregation Weighting of Federated Learning via Generalization Bound Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
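A hedged sketch contrasting the classical sample-proportion weighting with a generalization-gap-based alternative, using the train/validation gap as a crude stand-in for the paper's bound estimation; the function names are illustrative.
```python
# Two ways to weight client models before averaging.
import numpy as np

def proportion_weights(sample_counts):
    counts = np.asarray(sample_counts, dtype=float)
    return counts / counts.sum()             # classical FedAvg weighting

def gap_based_weights(train_losses, val_losses, temp=1.0):
    # Smaller estimated generalization gap -> larger aggregation weight.
    gaps = np.asarray(val_losses) - np.asarray(train_losses)
    scores = np.exp(-gaps / temp)
    return scores / scores.sum()

print(proportion_weights([100, 400, 500]))   # [0.1, 0.4, 0.5]
print(gap_based_weights([0.2, 0.3, 0.1], [0.5, 0.4, 0.6]))
```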
arXiv Detail & Related papers (2023-11-10T08:50:28Z)
- Enabling Quartile-based Estimated-Mean Gradient Aggregation As Baseline for Federated Image Classifications [5.5099914877576985]
Federated Learning (FL) has revolutionized how we train deep neural networks by enabling decentralized collaboration while safeguarding sensitive data and improving model performance.
This paper introduces an innovative solution named Estimated Mean Aggregation (EMA) that not only addresses these challenges but also provides a fundamental reference point as a baseline for advanced aggregation techniques in FL systems.
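A minimal sketch of one plausible reading of quartile-based estimated-mean aggregation: per-coordinate values outside the interquartile fence are dropped before averaging, which tempers outlier client gradients. The fence factor and function name are assumptions, not the paper's exact EMA procedure.
```python
# Quartile-fenced robust mean of client gradients (illustrative).
import numpy as np

def quartile_estimated_mean(client_grads: np.ndarray, k: float = 1.5):
    """client_grads: (num_clients, dim); returns a robust per-coordinate mean."""
    q1 = np.percentile(client_grads, 25, axis=0)
    q3 = np.percentile(client_grads, 75, axis=0)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    mask = (client_grads >= lo) & (client_grads <= hi)
    # Average only the in-fence values in each coordinate.
    return (client_grads * mask).sum(axis=0) / np.maximum(mask.sum(axis=0), 1)

grads = np.vstack([np.random.randn(9, 4), 50 * np.ones((1, 4))])  # one outlier
print(quartile_estimated_mean(grads))
```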
arXiv Detail & Related papers (2023-09-21T17:17:28Z)
- Reinforcement Federated Learning Method Based on Adaptive OPTICS Clustering [19.73560248813166]
This paper proposes an adaptive OPTICS clustering algorithm for federated learning.
By perceiving the clustering environment as a Markov decision process, the goal is to find the best parameters of the OPTICS cluster.
The reliability and practicability of this method have been verified on the experimental data, and its effectiveness and superiority have been proven.
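As a rough illustration only: the sketch below replaces the paper's Markov-decision-process formulation with a naive search over OPTICS parameters scored by clustering quality; the candidate set and the silhouette reward are assumptions.
```python
# Naive stand-in for reward-driven OPTICS parameter selection.
import numpy as np
from sklearn.cluster import OPTICS
from sklearn.metrics import silhouette_score

def tune_optics(client_features: np.ndarray, candidates=(2, 3, 5, 8)):
    best, best_reward = None, -np.inf
    for min_samples in candidates:          # "actions" over the parameter space
        labels = OPTICS(min_samples=min_samples).fit_predict(client_features)
        if len(set(labels)) < 2:            # silhouette needs >= 2 groups
            continue
        reward = silhouette_score(client_features, labels)
        if reward > best_reward:
            best, best_reward = min_samples, reward
    return best, best_reward

features = np.random.randn(40, 6)           # e.g., summaries of client updates
print(tune_optics(features))
```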
arXiv Detail & Related papers (2023-06-22T13:11:19Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
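A heavily hedged sketch of the reweighting spirit, not ReScore's actual bilevel procedure: per-sample weights grow with the current model's residuals on a linear-SEM least-squares score, so poorly fit samples gain influence.
```python
# Adaptive sample reweighting of a differentiable score (illustrative).
import torch

def reweighted_score(X: torch.Tensor, W: torch.Tensor, temp: float = 1.0):
    """X: (n, d) data; W: (d, d) weighted adjacency of a linear SEM."""
    residuals = ((X - X @ W) ** 2).sum(dim=1)    # per-sample squared error
    # Detach so the weights act as coefficients, not optimization targets.
    weights = torch.softmax(residuals.detach() / temp, dim=0)
    return (weights * residuals).sum()           # reweighted score

X = torch.randn(64, 5)
W = torch.zeros(5, 5, requires_grad=True)
reweighted_score(X, W).backward()
```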
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
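For context, here is a sketch of the evidential building block this entry extends: a Dirichlet head trained with the standard expected-cross-entropy EDL loss. The paper's Fisher-information reweighting of these terms is not reproduced here.
```python
# Standard evidential deep learning loss over a Dirichlet head.
import torch
import torch.nn.functional as F

def edl_loss(logits: torch.Tensor, targets: torch.Tensor):
    """logits: (n, num_classes); targets: one-hot (n, num_classes)."""
    evidence = F.softplus(logits)            # non-negative evidence per class
    alpha = evidence + 1.0                   # Dirichlet concentration
    strength = alpha.sum(dim=1, keepdim=True)
    # Expected cross-entropy under the Dirichlet (type-II MLE form).
    loss = (targets * (torch.digamma(strength) - torch.digamma(alpha))).sum(dim=1)
    return loss.mean()

logits = torch.randn(8, 3, requires_grad=True)
targets = F.one_hot(torch.randint(0, 3, (8,)), 3).float()
edl_loss(logits, targets).backward()
```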
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Deep Unfolding-based Weighted Averaging for Federated Learning in Heterogeneous Environments [11.023081396326507]
Federated learning is a collaborative model training method that iterates model updates by multiple clients and aggregation of the updates by a central server.
To adjust the aggregation weights, this paper employs deep unfolding, which is known as the parameter tuning method.
The proposed method can handle large-scale learning models with the aid of pretrained models, enabling it to perform practical real-world tasks.
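Complementing the unfolding sketch after the main abstract, here is the server-side step such methods tune: a weighted average over PyTorch state_dicts. The helper name is illustrative, not from this paper's code.
```python
# Weighted averaging of client models at the server (illustrative).
import torch

def weighted_average(state_dicts, weights):
    """state_dicts: list of model.state_dict(); weights: iterable summing to 1."""
    avg = {}
    for key in state_dicts[0]:
        avg[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return avg

# Toy usage with two tiny "models".
sd1 = {"w": torch.ones(2, 2), "b": torch.zeros(2)}
sd2 = {"w": 3 * torch.ones(2, 2), "b": torch.ones(2)}
print(weighted_average([sd1, sd2], [0.25, 0.75]))
```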
arXiv Detail & Related papers (2022-12-23T08:20:37Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees [49.91477656517431]
Quantization-based solvers have been widely adopted in Federated Learning (FL).
No existing methods enjoy all the aforementioned properties.
We propose an intuitively-simple yet theoretically-sound method based on SIGNSGD to bridge the gap.
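A hedged sketch of the core mechanism named above, simplified from the paper: stochastic one-bit quantization of each client's gradient followed by a server-side majority vote. The `bound` parameter is an assumed clipping constant.
```python
# Stochastic sign quantization with majority-vote aggregation (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sign(grad: np.ndarray, bound: float):
    """Quantize to +/-1 so that E[q] = grad / bound (for |grad| <= bound)."""
    prob_plus = 0.5 * (1.0 + np.clip(grad / bound, -1.0, 1.0))
    return np.where(rng.random(grad.shape) < prob_plus, 1.0, -1.0)

def majority_vote(quantized_grads):
    # Server aggregates one-bit gradients by their elementwise sign.
    return np.sign(np.sum(quantized_grads, axis=0))

grads = [rng.normal(size=8) for _ in range(5)]       # five clients
votes = [stochastic_sign(g, bound=4.0) for g in grads]
step = majority_vote(votes)                          # global update direction
```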
arXiv Detail & Related papers (2020-02-25T15:12:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.