Noise-aware Client Selection for carbon-efficient Federated Learning via Gradient Norm Thresholding
- URL: http://arxiv.org/abs/2603.04194v1
- Date: Wed, 04 Mar 2026 15:43:48 GMT
- Title: Noise-aware Client Selection for carbon-efficient Federated Learning via Gradient Norm Thresholding
- Authors: Patrick Wilhelm, Inese Yilmaz, Odej Kao
- Abstract summary: We introduce a modular approach on top of state-of-the-art client selection strategies for carbon-efficient Federated Learning. Our method enhances robustness by incorporating noisy-client data filtering, improving both model performance and sustainability.
- Score: 1.3585661787562995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training large-scale Neural Networks requires substantial computational power and energy. Federated Learning enables distributed model training across geospatially distributed data centers, leveraging renewable energy sources to reduce the carbon footprint of AI training. Various client selection strategies have been developed to align the volatility of renewable energy with stable and fair model training in a federated system. However, due to the privacy-preserving nature of Federated Learning, the quality of data on client devices remains unknown, posing challenges for effective model training. In this paper, we introduce a modular approach on top of state-of-the-art client selection strategies for carbon-efficient Federated Learning. Our method enhances robustness by incorporating noisy-client data filtering, improving both model performance and sustainability in scenarios with unknown data quality. Additionally, we explore the impact of carbon budgets on model convergence, balancing efficiency and sustainability. Through extensive evaluations, we demonstrate that modern client selection strategies based on local client loss tend to select clients with noisy data, ultimately degrading model performance. To address this, we propose a gradient norm thresholding mechanism using probing rounds for more effective client selection and noise detection, contributing to the practical deployment of carbon-efficient Federated Learning.
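The mechanism described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical rendering of gradient-norm thresholding with probing rounds plus a per-round carbon-budget check; the names (ProbeReport, select_clients), the carbon-intensity sort, and the specific selection rule are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of gradient-norm-threshold client filtering for
# carbon-efficient federated learning. All names and the selection rule are
# illustrative assumptions, not the paper's released code.
from dataclasses import dataclass
from typing import List


@dataclass
class ProbeReport:
    client_id: str
    grad_norm: float         # L2 norm of the local gradient from a probing round
    carbon_intensity: float  # gCO2/kWh at the client's data-center location


def filter_noisy_clients(reports: List[ProbeReport], threshold: float) -> List[ProbeReport]:
    """Drop clients whose probing-round gradient norm exceeds the threshold,
    treating unusually large norms as a sign of noisy local data."""
    return [r for r in reports if r.grad_norm <= threshold]


def select_clients(reports: List[ProbeReport],
                   num_clients: int,
                   grad_norm_threshold: float,
                   carbon_budget: float,
                   round_cost_estimate: float) -> List[str]:
    """Pick up to `num_clients` low-carbon clients after noise filtering,
    as long as the estimated per-round carbon cost fits the remaining budget."""
    if round_cost_estimate > carbon_budget:
        return []  # budget exhausted: skip or defer this training round
    candidates = filter_noisy_clients(reports, grad_norm_threshold)
    # Prefer clients currently powered by lower-carbon energy.
    candidates.sort(key=lambda r: r.carbon_intensity)
    return [r.client_id for r in candidates[:num_clients]]


if __name__ == "__main__":
    reports = [
        ProbeReport("dc-eu-1", grad_norm=0.8, carbon_intensity=120.0),
        ProbeReport("dc-us-2", grad_norm=5.3, carbon_intensity=300.0),  # likely noisy
        ProbeReport("dc-asia-3", grad_norm=1.1, carbon_intensity=90.0),
    ]
    chosen = select_clients(reports, num_clients=2, grad_norm_threshold=2.0,
                            carbon_budget=10.0, round_cost_estimate=1.5)
    print(chosen)  # ['dc-asia-3', 'dc-eu-1']
```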
Related papers
- FedCCA: Client-Centric Adaptation against Data Heterogeneity in Federated Learning on IoT Devices [16.902104043318975]
Client-Centric Adaptation federated learning (FedCCA) is an algorithm that optimally utilizes client-specific knowledge to learn a unique model for each client.
We conduct extensive experiments on diverse datasets to assess the efficacy of FedCCA.
arXiv Detail & Related papers (2026-01-25T06:01:19Z) - Decentralized Dynamic Cooperation of Personalized Models for Federated Continual Learning [50.56947843548702]
We propose a decentralized dynamic cooperation framework for federated continual learning.
Clients establish dynamic cooperative learning coalitions to balance the acquisition of new knowledge and the retention of prior learning.
We also propose a merge-blocking algorithm and a dynamic cooperative evolution algorithm to achieve cooperative and dynamic equilibrium.
arXiv Detail & Related papers (2025-09-28T06:53:23Z) - Robust Asymmetric Heterogeneous Federated Learning with Corrupted Clients [60.22876915395139]
This paper studies a challenging robust federated learning task with model-heterogeneous and data-corrupted clients.
Data corruption is unavoidable due to factors such as random noise, compression artifacts, or environmental conditions in real-world deployment.
We propose a novel Robust Asymmetric Heterogeneous Federated Learning framework to address these issues.
arXiv Detail & Related papers (2025-03-12T09:52:04Z) - HFedCKD: Toward Robust Heterogeneous Federated Learning via Data-free Knowledge Distillation and Two-way Contrast [10.652998357266934]
We propose a system-heterogeneous federated learning method based on data-free knowledge distillation and two-way contrast (HFedCKD).
HFedCKD effectively alleviates the knowledge offset caused by a low participation rate under data-free knowledge distillation and improves the performance and stability of the model.
We conduct extensive experiments on image and IoT datasets to comprehensively evaluate and verify the generalization and robustness of the proposed HFedCKD framework.
arXiv Detail & Related papers (2025-03-09T08:32:57Z) - Adversarial Federated Consensus Learning for Surface Defect Classification Under Data Heterogeneity in IIoT [8.48069043458347]
It is difficult to collect and centralize sufficient training data from various entities in the Industrial Internet of Things (IIoT).
Federated learning (FL) provides a solution by enabling collaborative global model training across clients.
We propose a novel personalized FL approach, named Adversarial Federated Consensus Learning (AFedCL).
arXiv Detail & Related papers (2024-09-24T03:59:32Z) - FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client adaptive algorithm designed to tackle this challenge.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z) - FedAA: A Reinforcement Learning Perspective on Adaptive Aggregation for Fair and Robust Federated Learning [5.622065847054885]
Federated Learning (FL) has emerged as a promising approach for privacy-preserving model training across decentralized devices.
We introduce a novel method called FedAA, which optimizes client contributions via Adaptive Aggregation to enhance model robustness against malicious clients.
arXiv Detail & Related papers (2024-02-08T10:22:12Z) - Federated Learning While Providing Model as a Service: Joint Training and Inference Optimization [30.305956110710266]
Federated learning is beneficial for enabling the training of models across distributed clients.
Existing work has overlooked the coexistence of model training and inference under clients' limited resources.
This paper focuses on the joint optimization of model training and inference to maximize inference performance at clients.
arXiv Detail & Related papers (2023-12-20T09:27:09Z) - CAFE: Carbon-Aware Federated Learning in Geographically Distributed Data Centers [18.54380015603228]
Training large-scale artificial intelligence (AI) models demands significant computational power and energy, leading to increased carbon footprint with potential environmental repercussions.
This paper delves into the challenges of training AI models across geographically distributed (geo-distributed) data centers, emphasizing the balance between learning performance and carbon footprint.
We propose a new framework called CAFE (short for Carbon-Aware Federated Learning) to optimize training within a fixed carbon footprint budget.
arXiv Detail & Related papers (2023-11-06T23:59:22Z) - Learning Objective-Specific Active Learning Strategies with Attentive Neural Processes [72.75421975804132]
Learning Active Learning (LAL) suggests learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z) - Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
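A class-imbalance measure of the kind described in the Fed-CBS entry above can be sketched as the distance between a grouped label distribution and the uniform distribution. The snippet below is a generic illustration under that assumption; it is not necessarily the paper's exact formulation, and the homomorphic-encryption step used there for privacy is omitted.

```python
# Generic sketch of a class-imbalance measure and a greedy balanced-group picker,
# in the spirit of Fed-CBS; illustrative only, not the paper's exact mechanism.
from collections import Counter
from typing import Dict, List


def class_imbalance_degree(label_counts: Dict[int, int], num_classes: int) -> float:
    """Quadratic distance between the group's label distribution and the uniform
    distribution over `num_classes`; 0 means perfectly class-balanced."""
    total = sum(label_counts.values())
    if total == 0:
        return float("inf")  # no data yet: treat as maximally imbalanced
    return sum((label_counts.get(c, 0) / total - 1.0 / num_classes) ** 2
               for c in range(num_classes))


def pick_balanced_group(client_labels: Dict[str, List[int]],
                        group_size: int,
                        num_classes: int) -> List[str]:
    """Greedily add the client that keeps the grouped data closest to class-balanced."""
    selected: List[str] = []
    pooled: Counter = Counter()
    remaining = dict(client_labels)
    for _ in range(min(group_size, len(remaining))):
        best = min(remaining,
                   key=lambda cid: class_imbalance_degree(pooled + Counter(remaining[cid]),
                                                          num_classes))
        pooled += Counter(remaining.pop(best))
        selected.append(best)
    return selected
```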