Diversified and Personalized Multi-rater Medical Image Segmentation
- URL: http://arxiv.org/abs/2403.13417v1
- Date: Wed, 20 Mar 2024 09:00:19 GMT
- Title: Diversified and Personalized Multi-rater Medical Image Segmentation
- Authors: Yicheng Wu, Xiangde Luo, Zhe Xu, Xiaoqing Guo, Lie Ju, Zongyuan Ge, Wenjun Liao, Jianfei Cai
- Abstract summary: We propose a two-stage framework named D-Persona (first Diversification and then Personalization).
In Stage I, we exploit multiple given annotations to train a Probabilistic U-Net model, with a bound-constrained loss to improve the prediction diversity.
In Stage II, we design multiple attention-based projection heads to adaptively query the corresponding expert prompts from the shared latent space, and then perform the personalized medical image segmentation.
- Score: 43.47142636000329
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Annotation ambiguity, caused by inherent data uncertainties such as blurred boundaries in medical scans and by differences in observer expertise and preferences, has become a major obstacle to training deep-learning-based medical image segmentation models. To address it, the common practice is to gather multiple annotations from different experts, leading to the setting of multi-rater medical image segmentation. Existing works aim either to merge different annotations into a "ground truth" that is often unattainable in numerous medical contexts, to generate diverse results, or to produce personalized results corresponding to individual expert raters. Here, we pursue a more ambitious goal for multi-rater medical image segmentation: obtaining both diversified and personalized results. Specifically, we propose a two-stage framework named D-Persona (first Diversification and then Personalization). In Stage I, we exploit the multiple given annotations to train a Probabilistic U-Net model with a bound-constrained loss that improves prediction diversity. In this way, Stage I constructs a common latent space in which different latent codes denote diversified expert opinions. In Stage II, we design multiple attention-based projection heads to adaptively query the corresponding expert prompts from the shared latent space and then perform personalized medical image segmentation. We evaluated the proposed model on our in-house Nasopharyngeal Carcinoma dataset and on the public lung nodule dataset LIDC-IDRI. Extensive experiments demonstrated that D-Persona can provide diversified and personalized results at the same time, achieving new state-of-the-art performance for multi-rater medical image segmentation. Our code will be released at https://github.com/ycwu1997/D-Persona.
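Below is a minimal PyTorch sketch of the two-stage idea as the abstract describes it. The module sizes, the class names, the latent-bank prompt design, and the omission of the bound-constrained diversity loss are all illustrative assumptions; the authors' actual implementation lives in the linked repository.

```python
# Illustrative sketch only -- not the authors' code (see the GitHub repo above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyProbUNet(nn.Module):
    """Stage I stand-in: a Probabilistic-U-Net-style model in which different
    samples of the latent code z yield diversified segmentations."""
    def __init__(self, in_ch=1, feat=16, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(feat + z_dim, 1, 3, padding=1)
        # Image-conditioned Gaussian prior over z (mu and log-variance).
        self.prior = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                   nn.Linear(in_ch, 2 * z_dim))

    def decode(self, x, z):
        h = self.enc(x)
        z_map = z[:, :, None, None].expand(-1, -1, *h.shape[2:])
        return self.dec(torch.cat([h, z_map], dim=1))   # mask logits

    def forward(self, x):
        mu, log_var = self.prior(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # sample z
        return self.decode(x, z), z

class RaterHeads(nn.Module):
    """Stage II stand-in: per-rater queries attend over a shared latent bank
    to retrieve an 'expert prompt' (a rater-specific latent code)."""
    def __init__(self, num_raters=4, z_dim=8, bank_size=32):
        super().__init__()
        self.bank = nn.Parameter(torch.randn(bank_size, z_dim))      # shared space
        self.queries = nn.Parameter(torch.randn(num_raters, z_dim))  # one per rater

    def forward(self, rater_id, batch_size):
        q = self.queries[rater_id]
        attn = F.softmax(self.bank @ q / q.numel() ** 0.5, dim=0)   # (bank_size,)
        return (attn @ self.bank).expand(batch_size, -1)            # rater code

stage1, stage2 = TinyProbUNet(), RaterHeads()
x = torch.randn(2, 1, 64, 64)               # a batch of scans
diverse_mask, _ = stage1(x)                  # Stage I: one of many diverse hypotheses
z_r = stage2(rater_id=1, batch_size=2)       # Stage II: personalized latent code
personalized_mask = stage1.decode(x, z_r)    # segmentation in rater 1's style
```

In a full version, Stage I's training loss would add the paper's bound-constrained diversity term, and the Stage II heads would be trained against each rater's own annotations.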
Related papers
- Multi-rater Prompting for Ambiguous Medical Image Segmentation [12.452584289825849]
Multi-rater annotations commonly occur when medical images are independently annotated by multiple experts (raters).
We propose a multi-rater prompt-based approach to address the resulting challenges altogether.
arXiv Detail & Related papers (2024-04-11T09:13:50Z)
- Inter-Rater Uncertainty Quantification in Medical Image Segmentation via Rater-Specific Bayesian Neural Networks [7.642026462053574]
We introduce a novel Bayesian neural network-based architecture to estimate inter-rater uncertainty in medical image segmentation.
Firstly, we introduce a one-encoder-multi-decoder architecture specifically tailored for uncertainty estimation.
Secondly, we propose Bayesian modeling for the new architecture, allowing efficient capture of the inter-rater distribution.
arXiv Detail & Related papers (2023-06-28T20:52:51Z)
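To make the one-encoder-multi-decoder layout above concrete, here is a small sketch. MC dropout stands in for the paper's rater-specific Bayesian modeling purely to keep the example short, so treat that substitution, and all names and sizes, as assumptions.

```python
# Illustrative one-encoder-multi-decoder sketch; MC dropout approximates
# Bayesian sampling here, which is a simplification of the paper's approach.
import torch
import torch.nn as nn

class OneEncoderMultiDecoder(nn.Module):
    def __init__(self, num_raters=3, in_ch=1, feat=16):
        super().__init__()
        # Shared encoder captures rater-independent anatomy.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.2))  # kept active below for MC sampling
        # One lightweight decoder per rater captures that rater's style.
        self.decoders = nn.ModuleList(
            nn.Conv2d(feat, 1, 3, padding=1) for _ in range(num_raters))

    def forward(self, x):
        h = self.encoder(x)
        return [dec(h) for dec in self.decoders]  # one logit map per rater

model = OneEncoderMultiDecoder()
model.train()  # keep dropout on so repeated passes approximate posterior samples
x = torch.randn(1, 1, 64, 64)
samples = torch.stack([torch.stack(model(x)) for _ in range(8)])  # (8, raters, ...)
inter_rater_map = samples.sigmoid().var(dim=(0, 1))  # combined uncertainty map
```

Variance across decoders reflects inter-rater disagreement, while variance across the repeated stochastic passes reflects model uncertainty.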
- Annotator Consensus Prediction for Medical Image Segmentation with Diffusion Models [70.3497683558609]
A major challenge in the segmentation of medical images is the large inter- and intra-observer variability in annotations provided by multiple experts.
We propose a novel method for multi-expert prediction using diffusion models.
arXiv Detail & Related papers (2023-06-15T10:01:05Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
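The key property both diffusion entries above rely on is that the reverse process is stochastic, so re-running it yields a distribution of plausible masks. The toy denoiser, the linear beta schedule, and the generic DDPM update below are illustrative choices, not the specific models of either paper.

```python
# Generic sketch: sampling multiple plausible masks from a diffusion model.
import torch
import torch.nn as nn

T = 50                                    # short toy schedule
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class ToyDenoiser(nn.Module):
    """Predicts the noise in a noisy mask, conditioned on the image
    (timestep embedding omitted for brevity)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, noisy_mask, image, t):
        return self.net(torch.cat([noisy_mask, image], dim=1))

@torch.no_grad()
def sample_mask(model, image):
    x = torch.randn_like(image)           # start the reverse process from noise
    for t in reversed(range(T)):
        eps = model(x, image, t)
        mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise  # DDPM ancestral sampling step
    return x.sigmoid()                    # one plausible soft mask

model, image = ToyDenoiser(), torch.randn(1, 1, 64, 64)
masks = [sample_mask(model, image) for _ in range(4)]  # 4 diverse hypotheses
consensus = torch.stack(masks).mean(dim=0)             # simple fused estimate
```

Averaging the sampled masks gives one simple consensus estimate in the spirit of the annotator-consensus entry; the papers' actual fusion schemes differ.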
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Using Soft Labels to Model Uncertainty in Medical Image Segmentation [0.0]
We propose a simple method to obtain soft labels from the annotations of multiple physicians.
For each image, our method produces a single well-calibrated output that can be thresholded at multiple confidence levels.
We evaluated our method on the MICCAI 2021 QUBIQ challenge, showing that it performs well across multiple medical image segmentation tasks.
arXiv Detail & Related papers (2021-09-26T14:47:18Z)
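Read literally, the method summarized above reduces to a very simple recipe. The sketch below shows the soft-label construction and multi-level thresholding; the training loss and calibration details are left out, and the thresholds are arbitrary examples.

```python
# Soft labels from multiple annotations, thresholded at several confidence levels.
import torch

annotations = torch.randint(0, 2, (4, 64, 64)).float()  # 4 physicians' binary masks
soft_label = annotations.mean(dim=0)  # per-pixel fraction of raters marking lesion

# A model would be trained against soft_label (e.g. BCE on probabilities);
# here the soft label itself stands in for a calibrated model output.
pred = soft_label
conservative = pred >= 0.75   # keep only high-agreement regions
liberal = pred >= 0.25        # also include minority-annotated regions
```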
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed in modality-specific information systems, so the same information is often available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
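A common way to obtain representations that survive modality dropping at test time is to drop modalities randomly during training. The sketch below shows only that mechanism, with hypothetical names and a trivial averaging fusion; it deliberately omits CMIM's actual mutual-information objective.

```python
# Training-time modality dropout for test-time robustness (mechanism only).
import torch
import torch.nn as nn

class FusedEncoder(nn.Module):
    """Per-modality encoders plus a fusion that tolerates missing views."""
    def __init__(self, num_modalities=2, in_ch=1, feat=16):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Conv2d(in_ch, feat, 3, padding=1) for _ in range(num_modalities))

    def forward(self, views, drop_p=0.5):
        feats = []
        for enc, v in zip(self.encoders, views):
            if self.training and torch.rand(()) < drop_p:
                v = torch.zeros_like(v)        # simulate a dropped modality
            feats.append(enc(v))
        return torch.stack(feats).mean(dim=0)  # fused, modality-robust features

enc = FusedEncoder()
ct, mri = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
fused_train = enc([ct, mri])                   # training pass may drop a view
enc.eval()
fused_test = enc([ct, torch.zeros_like(mri)])  # MRI missing at test time
```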
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation dataset MMWHS 2017 show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
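The mutual-distillation idea above can be sketched as two modality-specific networks that each treat the other's softened predictions as an extra target. For brevity, this toy version pretends the CT and MRI batches are spatially aligned and shares one label tensor, which is a simplifying assumption; the paper's online scheme is more involved.

```python
# Toy mutual knowledge distillation between two modality-specific networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

ct_net = nn.Conv2d(1, 4, 3, padding=1)    # stand-ins for full segmentation nets
mri_net = nn.Conv2d(1, 4, 3, padding=1)
opt = torch.optim.Adam(list(ct_net.parameters()) + list(mri_net.parameters()),
                       lr=1e-3)

ct = torch.randn(2, 1, 64, 64)            # assumed spatially aligned inputs
mri = torch.randn(2, 1, 64, 64)
labels = torch.randint(0, 4, (2, 64, 64)) # 4-class labels, for example

logits_ct, logits_mri = ct_net(ct), mri_net(mri)
sup = F.cross_entropy(logits_ct, labels) + F.cross_entropy(logits_mri, labels)

# Mutual KD: KL divergence in both directions, each network acting as the
# other's teacher (teacher side detached so gradients flow to the student only).
kd = F.kl_div(F.log_softmax(logits_ct, dim=1),
              F.softmax(logits_mri.detach(), dim=1), reduction="batchmean") \
   + F.kl_div(F.log_softmax(logits_mri, dim=1),
              F.softmax(logits_ct.detach(), dim=1), reduction="batchmean")

opt.zero_grad()
(sup + 0.1 * kd).backward()   # KD weight 0.1 is an arbitrary illustrative value
opt.step()
```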
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.