FedIA: Federated Medical Image Segmentation with Heterogeneous Annotation Completeness
- URL: http://arxiv.org/abs/2407.02280v2
- Date: Wed, 3 Jul 2024 07:12:34 GMT
- Title: FedIA: Federated Medical Image Segmentation with Heterogeneous Annotation Completeness
- Authors: Yangyang Xiang, Nannan Wu, Li Yu, Xin Yang, Kwang-Ting Cheng, Zengqiang Yan
- Abstract summary: Federated learning has emerged as a compelling paradigm for medical image segmentation.
This paper highlights a prevalent challenge in medical practice: incomplete annotations.
We introduce a novel solution, named FedIA, to tackle this issue.
- Score: 30.780654470392125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has emerged as a compelling paradigm for medical image segmentation, particularly in light of increasing privacy concerns. However, most existing research relies on relatively stringent assumptions regarding the uniformity and completeness of annotations across clients. In contrast, this paper highlights a prevalent challenge in medical practice: incomplete annotations. Such annotations can introduce incorrectly labeled pixels, potentially undermining the performance of neural networks in supervised learning. To tackle this issue, we introduce a novel solution, named FedIA. Our insight is to conceptualize incomplete annotations as noisy data (i.e., low-quality data), with a focus on mitigating their adverse effects. We begin by evaluating the completeness of annotations at the client level using a designed indicator. Subsequently, we enhance the influence of clients with more comprehensive annotations and implement corrections for incomplete ones, thereby ensuring that models are trained on accurate data. Our method's effectiveness is validated through its superior performance on two extensively used medical image segmentation datasets, outperforming existing solutions. The code is available at https://github.com/HUSTxyy/FedIA.
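The abstract does not spell out the completeness indicator or the aggregation rule, but the general idea of weighting clients by an annotation-completeness score during federated averaging can be sketched roughly as follows; the function names, the foreground-ratio heuristic, and the weighting scheme are illustrative assumptions, not the paper's actual implementation:

```python
import copy

def completeness_indicator(label_fg_ratio, pred_fg_ratio, eps=1e-6):
    """Hypothetical per-client completeness score: compare the foreground
    ratio in the (possibly incomplete) local labels against the ratio
    predicted by the current global model. A noticeably lower label ratio
    suggests missing annotations. Not the paper's actual indicator."""
    return min(1.0, label_fg_ratio / (pred_fg_ratio + eps))

def completeness_weighted_fedavg(client_states, completeness_scores):
    """FedAvg-style aggregation that up-weights clients whose annotations
    are judged more complete (a sketch, not the exact FedIA rule).
    client_states: list of PyTorch state_dicts from the clients."""
    total = sum(completeness_scores)
    weights = [s / total for s in completeness_scores]

    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            w * state[key].float() for w, state in zip(weights, client_states)
        )
    return global_state
```

On top of such client weighting, the abstract also mentions correcting incomplete annotations locally, which the sketch above does not cover.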
Related papers
- Affinity-Graph-Guided Contractive Learning for Pretext-Free Medical Image Segmentation with Minimal Annotation [55.325956390997]
This paper proposes an affinity-graph-guided semi-supervised contrastive learning framework (Semi-AGCL) for medical image segmentation.
The framework first designs an average-patch-entropy-driven inter-patch sampling method, which can provide a robust initial feature space.
With merely 10% of the complete annotation set, our model approaches the accuracy of the fully annotated baseline, with a marginal deviation of only 2.52%.
arXiv Detail & Related papers (2024-10-14T10:44:47Z) - Deep Bayesian Active Learning-to-Rank with Relative Annotation for Estimation of Ulcerative Colitis Severity [10.153691271169745]
We propose a deep Bayesian active learning-to-rank framework that automatically selects appropriate pairs for relative annotation.
Our method preferentially annotates unlabeled pairs with high learning efficiency, estimated from the model's uncertainty on the samples.
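As a rough illustration only (not the authors' implementation), uncertainty-driven pair selection could look like the sketch below, with Monte Carlo dropout standing in for the Bayesian uncertainty estimate and exhaustive pair scoring as a simplifying assumption:

```python
import itertools
import torch

def mc_dropout_uncertainty(model, images, n_passes=10):
    """Per-sample predictive uncertainty via Monte Carlo dropout
    (a common stand-in for a Bayesian uncertainty estimate)."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        preds = torch.stack([model(images) for _ in range(n_passes)])  # (T, B, ...)
    var = preds.var(dim=0)                              # variance over passes
    return var.reshape(var.shape[0], -1).mean(dim=1)    # one scalar per sample

def select_pairs(uncertainties, n_pairs):
    """Choose unlabeled pairs whose combined uncertainty is highest, so the
    relative-annotation budget is spent where the model is least certain."""
    scored = [
        (float(uncertainties[i] + uncertainties[j]), (i, j))
        for i, j in itertools.combinations(range(len(uncertainties)), 2)
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [pair for _, pair in scored[:n_pairs]]
```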
arXiv Detail & Related papers (2024-09-08T02:19:40Z) - FedA3I: Annotation Quality-Aware Aggregation for Federated Medical Image Segmentation against Heterogeneous Annotation Noise [10.417576145123256]
Federated learning (FL) has emerged as a promising paradigm for training segmentation models on decentralized medical data.
In this paper, we identify and tackle, for the first time, the problem of heterogeneous annotation noise across clients.
Experiments on two real-world medical image segmentation datasets demonstrate the superior performance of FedA3I against state-of-the-art approaches.
arXiv Detail & Related papers (2023-12-20T08:42:57Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - MIPR: Automatic Annotation of Medical Images with Pixel Rearrangement [7.39560318487728]
We propose a novel approach, called medical image pixel rearrangement (MIPR for short), to address the lack of annotated data from another angle.
The MIPR combines image-editing and pseudo-label technology to obtain labeled data.
Experiments on ISIC18 show that the data annotated by our method is as effective for the segmentation task as, or even better than, doctors' annotations.
arXiv Detail & Related papers (2022-04-22T05:54:14Z) - FedMed-ATL: Misaligned Unpaired Brain Image Synthesis via Affine Transform Loss [58.58979566599889]
We propose a novel self-supervised learning framework (FedMed) for brain image synthesis.
An affine transform loss (ATL) was formulated to make use of severely distorted images without violating privacy legislation.
The proposed method demonstrates superior quality of synthesized results under a severely misaligned and unpaired data setting.
arXiv Detail & Related papers (2022-01-29T13:45:39Z) - Federated Semi-supervised Medical Image Classification via Inter-client Relation Matching [58.26619456972598]
Federated learning (FL) has emerged with increasing popularity to collaborate distributed medical institutions for training deep networks.
This paper studies a practical yet challenging FL problem, named Federated Semi-supervised Learning (FSSL).
We present a novel approach for this problem, which improves over the traditional consistency regularization mechanism with a new inter-client relation matching scheme.
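For context, the traditional consistency regularization mechanism that this paper builds on (not its inter-client relation matching scheme, which the entry does not detail) can be sketched as follows; the weak/strong augmentation pairing is a common instantiation and an assumption here:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_images, weak_aug, strong_aug):
    """Consistency regularization on unlabeled data: predictions on a weakly
    augmented view serve as soft targets for predictions on a strongly
    augmented view of the same images."""
    with torch.no_grad():
        targets = F.softmax(model(weak_aug(unlabeled_images)), dim=1)
    log_preds = F.log_softmax(model(strong_aug(unlabeled_images)), dim=1)
    return F.kl_div(log_preds, targets, reduction="batchmean")
```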
arXiv Detail & Related papers (2021-06-16T07:58:00Z) - Every Annotation Counts: Multi-label Deep Supervision for Medical Image Segmentation [85.0078917060652]
We propose a semi-weakly supervised segmentation algorithm to overcome this barrier.
Our approach is based on a new formulation of deep supervision and a student-teacher model.
With our novel training regime for segmentation, which flexibly makes use of images that are fully labeled, marked with bounding boxes, annotated with only global labels, or not annotated at all, we are able to cut the requirement for expensive labels by 94.22%.
arXiv Detail & Related papers (2021-04-27T14:51:19Z) - Cascaded Robust Learning at Imperfect Labels for Chest X-ray Segmentation [61.09321488002978]
We present a novel cascaded robust learning framework for chest X-ray segmentation with imperfect annotation.
Our model consists of three independent networks, which can effectively learn useful information from their peer networks.
Our method achieves a significant improvement in segmentation accuracy compared to previous methods.
arXiv Detail & Related papers (2021-04-05T15:50:16Z) - Extreme Consistency: Overcoming Annotation Scarcity and Domain Shifts [2.707399740070757]
Supervised learning has proved effective for medical image analysis.
However, it can utilize only the small labeled portion of the data, failing to leverage the large amounts of unlabeled data that are often available in medical image datasets.
arXiv Detail & Related papers (2020-04-15T15:32:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.