FedDiv: Collaborative Noise Filtering for Federated Learning with Noisy
Labels
- URL: http://arxiv.org/abs/2312.12263v3
- Date: Fri, 16 Feb 2024 09:32:51 GMT
- Title: FedDiv: Collaborative Noise Filtering for Federated Learning with Noisy
Labels
- Authors: Jichang Li, Guanbin Li, Hui Cheng, Zicheng Liao, Yizhou Yu
- Abstract summary: Federated learning with noisy labels (F-LNL) aims at seeking an optimal server model via collaborative distributed learning.
We present FedDiv to tackle the challenges of F-LNL. Specifically, we propose a global noise filter called Federated Noise Filter.
- Score: 99.70895640578816
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning with noisy labels (F-LNL) aims at seeking an optimal
server model via collaborative distributed learning by aggregating multiple
client models trained with local noisy or clean samples. On the basis of a
federated learning framework, recent advances primarily adopt label noise
filtering to separate clean samples from noisy ones on each client, thereby
mitigating the negative impact of label noise. However, these prior methods do
not learn noise filters by exploiting knowledge across all clients, leading to
sub-optimal noise filtering performance and thus damaging training
stability. In this paper, we present FedDiv to tackle the challenges of F-LNL.
Specifically, we propose a global noise filter called Federated Noise Filter
for effectively identifying samples with noisy labels on every client, thereby
raising stability during local training sessions. Without sacrificing data
privacy, this is achieved by modeling the global distribution of label noise
across all clients. Then, in an effort to make the global model achieve higher
performance, we introduce a Predictive Consistency based Sampler to identify
more credible local data for local model training, thus preventing noise
memorization and further boosting the training stability. Extensive experiments
on CIFAR-10, CIFAR-100, and Clothing1M demonstrate that FedDiv
achieves superior performance over state-of-the-art F-LNL methods under
different label noise settings for both IID and non-IID data partitions. Source
code is publicly available at https://github.com/lijichang/FLNL-FedDiv.
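To make the abstract's two ingredients concrete, here is a minimal sketch, assuming (as is standard in loss-based noise filtering) that the filter is a two-component Gaussian mixture over per-sample training losses; per the abstract, FedDiv's Federated Noise Filter builds a global filter by modeling the noise distribution across all clients, whereas this sketch fits only a local one. The selection step loosely mirrors the Predictive Consistency based Sampler by keeping samples on which local and global model predictions agree. Function names and thresholds are illustrative assumptions, not the authors' API; see the repository linked above for the actual implementation.

```python
# Illustrative sketch only, not the authors' implementation.
# Assumes: per-sample cross-entropy losses from a local model, plus class
# predictions from both the local and the global model.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_noise_filter(losses):
    """Fit a two-component GMM to per-sample losses; the component with
    the lower mean is treated as the 'clean' mode."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(losses.reshape(-1, 1))
    clean_comp = int(np.argmin(gmm.means_.ravel()))
    return gmm, clean_comp

def select_credible(losses, local_pred, global_pred, p_clean_min=0.5):
    """Keep samples that (a) the filter marks as likely clean and
    (b) pass a predictive-consistency check (local and global models
    agree), loosely mirroring the Predictive Consistency based Sampler."""
    gmm, clean_comp = fit_noise_filter(losses)
    p_clean = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_comp]
    consistent = local_pred == global_pred
    return (p_clean > p_clean_min) & consistent

# Toy usage: 80 likely-clean samples (low loss) and 20 noisy ones (high loss).
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 80), rng.normal(2.0, 0.3, 20)])
local_pred = rng.integers(0, 10, 100)
global_pred = local_pred.copy()
mask = select_credible(losses, local_pred, global_pred)
print(f"kept {mask.sum()} / {len(mask)} samples")
```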
Related papers
- Collaboratively Learning Federated Models from Noisy Decentralized Data [21.3209961590772]
Federated learning (FL) has emerged as a prominent method for collaboratively training machine learning models using local data from edge devices.
We focus on addressing the problem of noisy data in the input space, an under-explored area compared to label noise.
We propose a noise-aware FL aggregation method, namely Federated Noise-Sifting (FedNS), which can be used as a plug-in approach in conjunction with widely used FL strategies.
arXiv Detail & Related papers (2024-09-03T18:00:51Z)
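FedNS is described as a plug-in on top of standard FL aggregation; the exact weighting rule is given in the paper. Purely as a generic illustration of noise-aware aggregation, the sketch below down-weights clients by an estimated noise score before a FedAvg-style average. The noise_scores input and the inverse weighting are assumptions, not FedNS's published rule.

```python
# Generic noise-aware aggregation sketch (not FedNS's exact rule).
# Assumes each client i contributes a flat parameter vector w_i and a
# noise score s_i in [0, 1), estimated on the server or client side.
import numpy as np

def noise_aware_average(client_weights, noise_scores):
    """FedAvg-style average where noisier clients get smaller weight."""
    alpha = 1.0 - np.asarray(noise_scores)  # cleaner client, larger weight
    alpha = alpha / alpha.sum()             # normalize to a convex combination
    stacked = np.stack(client_weights)      # shape: (num_clients, num_params)
    return alpha @ stacked                  # weighted parameter average

clients = [np.ones(4), 2 * np.ones(4), 10 * np.ones(4)]
scores = [0.1, 0.2, 0.9]                    # third client is very noisy
print(noise_aware_average(clients, scores))
```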
- Federated Learning with Extremely Noisy Clients via Negative Distillation [70.13920804879312]
Federated learning (FL) has shown remarkable success in cooperatively training deep models, while struggling with noisy labels.
We propose a novel approach, called negative distillation (FedNed), to leverage models trained on noisy clients.
FedNed first identifies noisy clients and then, rather than discarding them, exploits them in a knowledge-distillation manner.
arXiv Detail & Related papers (2023-12-20T01:59:48Z)
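The summary above says FedNed exploits noisy clients via distillation instead of discarding them. As one hedged reading of "negative distillation" (an illustration, not the paper's published loss), the sketch below penalizes a student model for agreeing with the top-1 prediction of a model trained on a noisy client:

```python
# Hedged sketch of a 'negative distillation' style penalty; a
# complementary-label-style stand-in, not FedNed's actual objective.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def negative_distillation_penalty(student_logits, noisy_teacher_logits):
    """Penalty that grows when the student puts probability mass on the
    noisy teacher's top-1 class, i.e. the student is pushed away from
    the noisy model's predictions instead of mimicking them."""
    p_student = softmax(student_logits)
    bad_class = noisy_teacher_logits.argmax(axis=1)      # teacher's guess
    agree = p_student[np.arange(len(bad_class)), bad_class]
    return -np.log(1.0 - agree + 1e-8).mean()            # discourage agreement

student = np.array([[2.0, 0.1, 0.1], [0.1, 2.0, 0.1]])
teacher = np.array([[0.1, 2.0, 0.1], [0.1, 2.0, 0.1]])   # noisy teacher
print(negative_distillation_penalty(student, teacher))
```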
- Learning Cautiously in Federated Learning with Noisy and Heterogeneous Clients [4.782145666637457]
Federated learning (FL) is a distributed framework for collaboratively training with privacy guarantees.
In real-world scenarios, clients may have Non-IID data (local class imbalance) with poor annotation quality (label noise).
We propose FedCNI without using an additional clean proxy dataset.
It includes a noise-resilient local solver and a robust global aggregator.
arXiv Detail & Related papers (2023-04-06T06:47:14Z)
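FedCNI's two components are specified in the paper; purely as a generic stand-in for the robust global aggregator, the sketch below uses coordinate-wise median aggregation, which bounds the influence of any single client whose update was corrupted by label noise:

```python
# Generic robust-aggregation illustration (not FedCNI's actual aggregator).
import numpy as np

def median_aggregate(client_weights):
    """Coordinate-wise median of client parameter vectors; a single
    outlying client cannot drag any coordinate arbitrarily far."""
    return np.median(np.stack(client_weights), axis=0)

clients = [np.ones(4), 1.1 * np.ones(4), 100 * np.ones(4)]  # one outlier
print(median_aggregate(clients))  # -> [1.1 1.1 1.1 1.1], outlier ignored
```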
- Neighborhood Collective Estimation for Noisy Label Identification and Correction [92.20697827784426]
Learning with noisy labels (LNL) aims at designing strategies to improve model performance and generalization by mitigating the effects of model overfitting to noisy labels.
Recent advances employ the predicted label distributions of individual samples to perform noise verification and noisy label correction, easily giving rise to confirmation bias.
We propose Neighborhood Collective Estimation, in which the predictive reliability of a candidate sample is re-estimated by contrasting it against its feature-space nearest neighbors.
arXiv Detail & Related papers (2022-08-05T14:47:22Z)
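The neighborhood idea lends itself to a compact sketch: re-score each sample's given label by how much probability its feature-space nearest neighbors' predicted distributions assign to that label. The choice of k and the threshold in the toy usage are illustrative assumptions:

```python
# Sketch in the spirit of Neighborhood Collective Estimation: a sample's
# label is trusted more when its feature-space neighbors agree with it.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_reliability(features, pred_probs, labels, k=5):
    """Score each sample by the average probability its k nearest
    neighbors assign to the sample's own (possibly noisy) label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)          # idx[:, 0] is the sample itself
    neighbor_probs = pred_probs[idx[:, 1:]]   # shape: (n, k, num_classes)
    return neighbor_probs[np.arange(len(labels)), :, labels].mean(axis=1)

rng = np.random.default_rng(0)
features = rng.normal(size=(50, 8))
pred_probs = rng.dirichlet(np.ones(10), size=50)
labels = rng.integers(0, 10, 50)
scores = neighborhood_reliability(features, pred_probs, labels)
print("likely-clean fraction:", (scores > 0.1).mean())
```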
- FedNoiL: A Simple Two-Level Sampling Method for Federated Learning with Noisy Labels [49.47228898303909]
Federated learning (FL) aims at training a global model on the server side while the training data are collected and located at the local devices.
Local training on noisy labels can easily result in overfitting to noisy labels, which is devastating to the global model through aggregation.
We develop a simple two-level sampling method "FedNoiL" that selects clients for more robust global aggregation on the server.
arXiv Detail & Related papers (2022-05-20T12:06:39Z)
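The two-level structure can be sketched directly: the server samples clients by an estimated label-quality score (level 1), and each chosen client then prefers low-loss, likely-clean samples (level 2). Both scoring rules here are assumptions for illustration, not the paper's exact criteria:

```python
# Sketch of two-level sampling in the spirit of FedNoiL.
import numpy as np

rng = np.random.default_rng(0)

def sample_clients(client_scores, m):
    """Level 1: pick m clients with probability proportional to an
    estimated label-quality score."""
    p = np.asarray(client_scores, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=m, replace=False, p=p)

def sample_clean_data(losses, n):
    """Level 2: within a client, prefer low-loss (likely clean) samples."""
    return np.argsort(losses)[:n]

chosen = sample_clients([0.9, 0.8, 0.2, 0.1], m=2)
losses = rng.exponential(size=20)
print("clients:", chosen, "| local picks:", sample_clean_data(losses, 5))
```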
- FedCorr: Multi-Stage Federated Learning for Label Noise Correction [80.9366438220228]
Federated learning (FL) is a privacy-preserving distributed learning paradigm that enables clients to jointly train a global model.
We propose FedCorr, a general multi-stage framework to tackle heterogeneous label noise in FL.
Experiments conducted on CIFAR-10/100 with federated synthetic label noise, and on a real-world noisy dataset, Clothing1M, demonstrate that FedCorr is robust to label noise.
arXiv Detail & Related papers (2022-04-10T12:51:18Z)
- Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model from multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z)