FedA3I: Annotation Quality-Aware Aggregation for Federated Medical Image
Segmentation against Heterogeneous Annotation Noise
- URL: http://arxiv.org/abs/2312.12838v2
- Date: Thu, 18 Jan 2024 15:27:37 GMT
- Authors: Nannan Wu, Zhaobin Sun, Zengqiang Yan, Li Yu
- Abstract summary: Federated learning (FL) has emerged as a promising paradigm for training segmentation models on decentralized medical data.
This paper is the first to identify and tackle the problem of heterogeneous annotation noise in FL.
Experiments on two real-world medical image segmentation datasets demonstrate the superior performance of FedA$^3$I against state-of-the-art approaches.
- Score: 10.417576145123256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has emerged as a promising paradigm for training
segmentation models on decentralized medical data, owing to its
privacy-preserving property. However, existing research overlooks the prevalent
annotation noise encountered in real-world medical datasets, which limits the
performance ceiling of FL. In this paper, we are the first to identify and
tackle this problem. For problem formulation, we propose a contour evolution process
for modeling non-independent and identically distributed (Non-IID) noise across
pixels within each client and then extend it to the case of multi-source data
to form a heterogeneous noise model (i.e., Non-IID annotation noise across
clients). For robust learning from annotations with such two-level Non-IID
noise, we emphasize the importance of data quality in model aggregation,
allowing high-quality clients to have a greater impact on FL. To achieve this,
we propose Federated learning with Annotation quAlity-aware AggregatIon, named
FedA3I, by introducing a quality factor based on client-wise noise estimation.
Specifically, noise estimation at each client is accomplished through the
Gaussian mixture model and then incorporated into model aggregation in a
layer-wise manner to up-weight high-quality clients. Extensive experiments on
two real-world medical image segmentation datasets demonstrate the superior
performance of FedA$^3$I against state-of-the-art approaches in dealing
with cross-client annotation noise. The code is available at
https://github.com/wnn2000/FedAAAI.
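The aggregation idea described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the function names, the softmax-style quality weighting, the flat (rather than layer-wise) averaging, and the simple 1-D two-component EM fit are all assumptions made for illustration; FedA$^3$I's actual quality factor and layer-wise aggregation are defined in the paper.

```python
import numpy as np

def estimate_noise_rate(losses, n_iter=50):
    """Estimate a client's annotation-noise rate from per-pixel losses.

    Fits a two-component 1-D Gaussian mixture by EM and returns the mixing
    weight of the higher-mean (high-loss) component as the noise estimate.
    """
    x = np.asarray(losses, dtype=float)
    mu = np.array([x.min(), x.max()])          # component means
    var = np.array([x.var() + 1e-6] * 2)       # component variances
    pi = np.array([0.5, 0.5])                  # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each Gaussian for each loss value
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return pi[np.argmax(mu)]  # weight of the high-loss component

def quality_weights(noise_rates, temperature=1.0):
    """Turn per-client noise estimates into aggregation weights.

    Clients with lower estimated noise get larger weights (softmax over
    negative noise rates); temperature controls how sharply high-quality
    clients are up-weighted.
    """
    q = -np.asarray(noise_rates, dtype=float) / temperature
    q = np.exp(q - q.max())  # numerically stable softmax
    return q / q.sum()

def aggregate(client_params, noise_rates):
    """Quality-aware weighted average of client parameter vectors."""
    w = quality_weights(noise_rates)
    stacked = np.stack([np.asarray(p, dtype=float) for p in client_params])
    return (w[:, None] * stacked).sum(axis=0)
```

Under this sketch, a client whose losses split into a clean low-loss mode and a noisy high-loss mode gets a noise estimate near the noisy fraction, and the server down-weights it accordingly.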
Related papers
- Collaboratively Learning Federated Models from Noisy Decentralized Data [21.3209961590772]
Federated learning (FL) has emerged as a prominent method for collaboratively training machine learning models using local data from edge devices.
We focus on addressing the problem of noisy data in the input space, an under-explored area compared to label noise.
We propose a noise-aware FL aggregation method, namely Federated Noise-Sifting (FedNS), which can be used as a plug-in approach in conjunction with widely used FL strategies.
arXiv Detail & Related papers (2024-09-03T18:00:51Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Adaptive Differential Privacy in Federated Learning: A Priority-Based Approach [0.0]
Federated learning (FL) develops global models without direct access to local datasets.
Differential privacy (DP) offers a framework that provides a privacy guarantee by adding controlled amounts of noise to model parameters.
We propose adaptive noise addition in FL which decides the value of injected noise based on features' relative importance.
arXiv Detail & Related papers (2024-01-04T03:01:15Z)
- FedDiv: Collaborative Noise Filtering for Federated Learning with Noisy Labels [99.70895640578816]
Federated learning with noisy labels (F-LNL) aims to learn an optimal server model via collaborative distributed learning.
We present FedDiv to tackle the challenges of F-LNL. Specifically, we propose a global noise filter called Federated Noise Filter.
arXiv Detail & Related papers (2023-12-19T15:46:47Z)
- Learning Confident Classifiers in the Presence of Label Noise [5.829762367794509]
This paper proposes a probabilistic model for noisy observations that allows us to build confident classification and segmentation models.
Our experiments show that our algorithm outperforms state-of-the-art solutions for the considered classification and segmentation problems.
arXiv Detail & Related papers (2023-01-02T04:27:25Z)
- Quantifying the Impact of Label Noise on Federated Learning [7.531486350989069]
Federated Learning (FL) is a distributed machine learning paradigm where clients collaboratively train a model using their local (human-generated) datasets.
This paper provides a quantitative study on the impact of label noise on FL.
Our empirical results show that the global model accuracy linearly decreases as the noise level increases.
arXiv Detail & Related papers (2022-11-15T00:40:55Z)
- FedCorr: Multi-Stage Federated Learning for Label Noise Correction [80.9366438220228]
Federated learning (FL) is a privacy-preserving distributed learning paradigm that enables clients to jointly train a global model.
We propose FedCorr, a general multi-stage framework to tackle heterogeneous label noise in FL.
Experiments conducted on CIFAR-10/100 with federated synthetic label noise, and on a real-world noisy dataset, Clothing1M, demonstrate that FedCorr is robust to label noise.
arXiv Detail & Related papers (2022-04-10T12:51:18Z)
- Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model from multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z)
- Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding [76.89426311082927]
Existing models are trained on clean data, which causes a gap between clean data training and real-world inference.
We propose a method from the perspective of domain adaptation, by which both high- and low-quality samples are embedded into a similar vector space.
Experiments on the widely used Snips dataset and a large-scale in-house dataset (10 million training examples) demonstrate that this method not only outperforms baseline models on a real-world (noisy) corpus but also enhances robustness, producing high-quality results in noisy environments.
arXiv Detail & Related papers (2021-04-13T17:54:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.