Labeling Chaos to Learning Harmony: Federated Learning with Noisy Labels
- URL: http://arxiv.org/abs/2208.09378v3
- Date: Fri, 26 May 2023 14:08:35 GMT
- Title: Labeling Chaos to Learning Harmony: Federated Learning with Noisy Labels
- Authors: Vasileios Tsouvalas, Aaqib Saeed, Tanir Ozcelebi, Nirvana Meratnia
- Abstract summary: Federated Learning (FL) is a distributed machine learning paradigm that enables learning models from decentralized private datasets.
We propose FedLN, a framework to deal with label noise across different FL training stages.
Our evaluation on various publicly available vision and audio datasets demonstrates a 22% improvement on average compared to other existing methods for a label noise level of 60%.
- Score: 3.4620497416430456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a distributed machine learning paradigm that
enables learning models from decentralized private datasets, where the labeling
effort is entrusted to the clients. While most existing FL approaches assume
that high-quality labels are readily available on users' devices, in reality,
label noise can naturally occur in FL and is closely related to clients'
characteristics. Due to the scarcity of available data and significant label noise
variations among clients in FL, existing state-of-the-art centralized
approaches exhibit unsatisfactory performance, while prior FL studies rely on
excessive on-device computational schemes or additional clean data available on
the server. Here, we propose FedLN, a framework to deal with label noise across
different FL training stages, namely FL initialization, on-device model
training, and server model aggregation, and is able to accommodate the diverse
computational capabilities of devices in an FL system. Specifically, FedLN
computes per-client noise-level estimation in a single federated round and
improves the models' performance by either correcting or mitigating the effect
of noisy samples. Our evaluation on various publicly available vision and audio
datasets demonstrates a 22% improvement on average compared to other existing
methods for a label noise level of 60%. We further validate the efficiency of
FedLN on human-annotated real-world noisy datasets and report a 4.8% increase
on average in models' recognition performance, highlighting that FedLN can be
useful for improving FL services provided to everyday users.
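The abstract describes FedLN's mechanism only at a high level. Below is a minimal sketch of how single-round noise estimation and noisy-sample mitigation could look, assuming a hypothetical setup in which each client scores its local labels against predictions from a server-provided model; the function names, the disagreement-based estimator, and the down-weighting rule are all illustrative assumptions, not FedLN's actual implementation.

```python
import numpy as np

def estimate_noise_level(model_predict, features, labels):
    """Estimate a client's label-noise level in a single round by
    measuring disagreement between local labels and the predictions
    of a server-provided model (illustrative stand-in estimator)."""
    preds = model_predict(features)
    return float(np.mean(preds != labels))

def reweight_noisy_samples(noise_level, n_samples, suspect_mask):
    """Mitigate (rather than correct) noisy samples by down-weighting
    suspected ones in the local training loss."""
    weights = np.ones(n_samples)
    # Suspected samples receive lower weight as estimated noise grows.
    weights[suspect_mask] = 1.0 - noise_level
    return weights / weights.sum()

# Toy usage: 60% symmetric noise on binary labels, with an oracle-like
# model that happens to predict the clean labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
y_clean = rng.integers(0, 2, size=100)
flipped = rng.random(100) < 0.6
y_noisy = np.where(flipped, 1 - y_clean, y_clean)

model_predict = lambda feats: y_clean  # hypothetical pre-trained model
noise = estimate_noise_level(model_predict, X, y_noisy)
suspects = model_predict(X) != y_noisy
sample_weights = reweight_noisy_samples(noise, len(y_noisy), suspects)
print(f"estimated noise level: {noise:.2f}")
```

The abstract's alternative strategy, label correction, would replace the down-weighting step with relabeling suspected samples using the model's predictions; the sketch shows only the mitigation path.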
Related papers
- Federated Learning Client Pruning for Noisy Labels [6.30126491637621]
Federated Learning (FL) enables collaborative model training across decentralized edge devices.
This paper introduces ClipFL, a novel framework addressing noisy labels from a fresh perspective.
It identifies and excludes noisy clients based on their performance on a clean validation dataset.
arXiv Detail & Related papers (2024-11-11T21:46:34Z)
- Collaboratively Learning Federated Models from Noisy Decentralized Data [21.3209961590772]
Federated learning (FL) has emerged as a prominent method for collaboratively training machine learning models using local data from edge devices.
We focus on addressing the problem of noisy data in the input space, an under-explored area compared to label noise.
We propose a noise-aware FL aggregation method, namely Federated Noise-Sifting (FedNS), which can be used as a plug-in approach in conjunction with widely used FL strategies.
arXiv Detail & Related papers (2024-09-03T18:00:51Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedDiv: Collaborative Noise Filtering for Federated Learning with Noisy Labels [99.70895640578816]
Federated learning with noisy labels (F-LNL) aims at seeking an optimal server model via collaborative distributed learning.
We present FedDiv to tackle the challenges of F-LNL. Specifically, we propose a global noise filter called Federated Noise Filter.
arXiv Detail & Related papers (2023-12-19T15:46:47Z)
- Learning Cautiously in Federated Learning with Noisy and Heterogeneous Clients [4.782145666637457]
Federated learning (FL) is a distributed framework for collaboratively training with privacy guarantees.
In real-world scenarios, clients may have Non-IID data (local class imbalance) with poor annotation quality (label noise).
We propose FedCNI without using an additional clean proxy dataset.
It includes a noise-resilient local solver and a robust global aggregator.
arXiv Detail & Related papers (2023-04-06T06:47:14Z)
- Quantifying the Impact of Label Noise on Federated Learning [7.531486350989069]
Federated Learning (FL) is a distributed machine learning paradigm where clients collaboratively train a model using their local (human-generated) datasets.
This paper provides a quantitative study on the impact of label noise on FL.
Our empirical results show that the global model accuracy linearly decreases as the noise level increases.
arXiv Detail & Related papers (2022-11-15T00:40:55Z)
- FedNoiL: A Simple Two-Level Sampling Method for Federated Learning with Noisy Labels [49.47228898303909]
Federated learning (FL) aims at training a global model on the server side while the training data are collected and located at the local devices.
Local training on noisy labels can easily result in overfitting to noisy labels, which is devastating to the global model through aggregation.
We develop a simple two-level sampling method "FedNoiL" that selects clients for more robust global aggregation on the server.
arXiv Detail & Related papers (2022-05-20T12:06:39Z)
- FedCorr: Multi-Stage Federated Learning for Label Noise Correction [80.9366438220228]
Federated learning (FL) is a privacy-preserving distributed learning paradigm that enables clients to jointly train a global model.
We propose $\texttt{FedCorr}$, a general multi-stage framework to tackle heterogeneous label noise in FL.
Experiments conducted on CIFAR-10/100 with federated synthetic label noise, and on a real-world noisy dataset, Clothing1M, demonstrate that $\texttt{FedCorr}$ is robust to label noise.
arXiv Detail & Related papers (2022-04-10T12:51:18Z)
- Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model depending on multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
- FOCUS: Dealing with Label Quality Disparity in Federated Learning [25.650278226178298]
We propose Federated Opportunistic Computing for Ubiquitous Systems (FOCUS) to address this challenge.
FOCUS quantifies the credibility of the client local data without directly observing them.
It effectively identifies clients with noisy labels and reduces their impact on the model performance.
arXiv Detail & Related papers (2020-01-29T09:31:01Z)
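A recurring theme across the papers listed above (e.g., FedNS and Fed-NCL) is noise-aware server aggregation that down-weights updates from clients estimated to be noisy, used as a plug-in on top of FedAvg-style strategies. The sketch below illustrates that general idea under assumed inputs; the exponential weighting rule and all names are hypothetical and do not reproduce any specific paper's algorithm.

```python
import numpy as np

def noise_aware_aggregate(client_updates, noise_estimates, temperature=1.0):
    """Plug-in variant of federated averaging that down-weights clients
    with higher estimated label-noise levels (illustrative rule only)."""
    noise = np.asarray(noise_estimates, dtype=float)
    # Cleaner clients (lower estimated noise) receive larger weight.
    weights = np.exp(-noise / temperature)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy usage: three clients' parameter vectors with different noise levels.
updates = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 3.0)]
noise_levels = [0.1, 0.6, 0.9]  # client 0 is the cleanest
print(noise_aware_aggregate(updates, noise_levels))
```

The temperature parameter controls how sharply noisy clients are suppressed; at very low temperatures the rule approaches hard client exclusion, similar in spirit to ClipFL's pruning.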
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.