MedUHIP: Towards Human-In-the-Loop Medical Segmentation
- URL: http://arxiv.org/abs/2408.01620v1
- Date: Sat, 3 Aug 2024 01:06:02 GMT
- Title: MedUHIP: Towards Human-In-the-Loop Medical Segmentation
- Authors: Jiayuan Zhu, Junde Wu
- Abstract summary: Medical image segmentation is particularly complicated by inherent uncertainties.
We propose a novel approach that integrates an uncertainty-aware model with human-in-the-loop interaction.
Our method showcases superior segmentation capabilities, outperforming a wide range of deterministic and uncertainty-aware models.
- Score: 5.520419627866446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although techniques for segmenting natural images have shown impressive performance, they cannot be directly applied to medical image segmentation. Medical image segmentation is particularly complicated by inherent uncertainties. For instance, the ambiguous boundaries of tissues can lead to diverse but plausible annotations from different clinicians. These uncertainties cause significant discrepancies in clinical interpretations and impact subsequent medical interventions. Therefore, achieving quantitative segmentations from uncertain medical images becomes crucial in clinical practice. To address this, we propose a novel approach that integrates an \textbf{uncertainty-aware model} with \textbf{human-in-the-loop interaction}. The uncertainty-aware model proposes several plausible segmentations to address the uncertainties inherent in medical images, while the human-in-the-loop interaction iteratively modifies the segmentation under clinician supervision. This collaborative model ensures that segmentation is not solely dependent on automated techniques but is also refined through clinician expertise. As a result, our approach represents a significant advancement in the field, which enhances the safety of medical image segmentation. It not only offers a comprehensive solution to produce quantitative segmentation from inherently uncertain medical images, but also establishes a synergistic balance between algorithmic precision and clinician knowledge. We evaluated our method on various publicly available multi-clinician annotated datasets: REFUGE2, LIDC-IDRI and QUBIQ. Our method showcases superior segmentation capabilities, outperforming a wide range of deterministic and uncertainty-aware models. We also demonstrated that our model produced significantly better results with fewer interactions compared to previous interactive models. We will release the code to foster further research in this area.
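The collaborative loop the abstract describes can be sketched as follows. This is a minimal toy illustration, not the paper's released code: `propose_segmentations` is a hypothetical stand-in for the uncertainty-aware model (random disks with perturbed radii playing the role of plausible segmentations), and `clinician_feedback` stands in for one interactive correction step.

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_segmentations(image, n_samples=4):
    """Stand-in for the uncertainty-aware model: returns several
    plausible binary masks (here, disks with perturbed radii that
    mimic an ambiguous tissue boundary)."""
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    masks = []
    for _ in range(n_samples):
        r = 8 + rng.normal(scale=1.0)  # boundary ambiguity
        masks.append(
            ((yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r ** 2).astype(np.uint8)
        )
    return masks

def clinician_feedback(mask, reference):
    """Stand-in for one interactive correction ("click"): copy the
    clinician's label at one pixel where the current mask disagrees."""
    disagreement = mask != reference
    if not disagreement.any():
        return mask, True
    idx = tuple(np.argwhere(disagreement)[0])
    corrected = mask.copy()
    corrected[idx] = reference[idx]
    return corrected, False

image = np.zeros((32, 32))
reference = propose_segmentations(image, 1)[0]  # pretend clinician intent
proposals = propose_segmentations(image)

# Start from the proposal closest to the clinician's intent, then refine
# it iteratively under (simulated) clinician supervision.
best = max(proposals, key=lambda m: (m == reference).mean())
clicks = 0
done = False
while not done:
    best, done = clinician_feedback(best, reference)
    clicks += 1
print("clicks:", clicks, "agreement:", (best == reference).mean())
```

Starting from the most plausible automatic proposal is what keeps the interaction count low: the clinician only corrects residual disagreement rather than drawing the mask from scratch.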
Related papers
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
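The core idea, sampling a diffusion model repeatedly so that different noise trajectories yield different plausible masks, can be sketched with a toy stand-in. `sample_mask` below is a hypothetical placeholder for one reverse-diffusion trajectory (a disk whose radius depends on the noise seed), not the paper's network; the point is that an ensemble of samples induces a per-pixel distribution.

```python
import numpy as np

def sample_mask(image, seed):
    """Toy stand-in for one reverse-diffusion sampling trajectory:
    a different noise seed yields a different plausible boundary."""
    local = np.random.default_rng(seed)
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = 10 + local.normal(scale=1.5)
    return (((yy - h // 2) ** 2 + (xx - w // 2) ** 2) <= r ** 2).astype(float)

image = np.zeros((48, 48))
samples = np.stack([sample_mask(image, s) for s in range(16)])

# The sample set approximates a distribution over segmentation masks:
# the per-pixel frequency is 1 inside the consensus region, 0 outside,
# and fractional in the ambiguous boundary band.
pixelwise_freq = samples.mean(axis=0)
uncertain = (pixelwise_freq > 0) & (pixelwise_freq < 1)
print("uncertain pixels:", int(uncertain.sum()))
```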
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Swin Deformable Attention Hybrid U-Net for Medical Image Segmentation [3.407509559779547]
We propose to incorporate the Shifted Window (Swin) Deformable Attention into a hybrid architecture to improve segmentation performance.
Our proposed Swin Deformable Attention Hybrid UNet (SDAH-UNet) demonstrates state-of-the-art performance on both anatomical and lesion segmentation tasks.
arXiv Detail & Related papers (2023-02-28T09:54:53Z)
- Multi-Modal Evaluation Approach for Medical Image Segmentation [4.989480853499916]
We propose a novel multi-modal evaluation (MME) approach to measure the effectiveness of different segmentation methods.
We introduce new relevant and interpretable characteristics, including detection property, boundary alignment, uniformity, total volume, and relative volume.
Our proposed approach is open-source and publicly available for use.
arXiv Detail & Related papers (2023-02-08T15:31:33Z)
- Bayesian approaches for Quantifying Clinicians' Variability in Medical Image Quantification [0.16314780449435543]
We show that Bayesian predictive distribution parameterized by deep neural networks could approximate the clinicians' inter-intra variability.
We show a new perspective in analyzing medical images quantitatively by providing clinical measurement uncertainty.
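Turning segmentation uncertainty into clinical measurement uncertainty can be sketched in a few lines. The generator below is a hypothetical stand-in for draws from a Bayesian predictive distribution (in practice obtained via, e.g., MC dropout or a deep ensemble); each draw is one plausible segmentation, and a derived measurement such as lesion area then comes with an empirical spread.

```python
import numpy as np

rng = np.random.default_rng(2)

def predictive_masks(n_samples, shape=(40, 40)):
    """Stand-in for samples from a Bayesian predictive distribution:
    each draw is one plausible segmentation of the same image."""
    h, w = shape
    yy, xx = np.mgrid[:h, :w]
    for _ in range(n_samples):
        r = 9 + rng.normal(scale=0.8)  # parameter uncertainty
        yield ((yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= r ** 2)

# Report the clinical measurement (area in pixels) with uncertainty,
# instead of a single point estimate.
areas = np.array([m.sum() for m in predictive_masks(200)])
print(f"area = {areas.mean():.1f} +/- {areas.std():.1f} px")
```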
arXiv Detail & Related papers (2022-07-05T08:04:02Z)
- CRISP - Reliable Uncertainty Estimation for Medical Image Segmentation [6.197149831796131]
We propose CRISP, a ContRastive Image Segmentation for uncertainty Prediction method.
At its core, CRISP implements a contrastive method to learn a joint latent space which encodes a distribution of valid segmentations.
We use this joint latent space to compare predictions to thousands of latent vectors and provide anatomically consistent uncertainty maps.
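The comparison step can be sketched as follows. Everything here is a hypothetical stand-in for the CRISP pipeline, not its actual encoder or loss: `latent_bank` plays the role of the learned latent vectors of valid segmentations, `encode` the role of the trained encoder, and uncertainty is read off as disagreement between the prediction and a similarity-weighted vote over the bank.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical joint latent space: each row encodes one valid
# segmentation, paired with its decoded binary mask.
latent_bank = rng.normal(size=(1000, 16))
bank_masks = rng.random((1000, 8, 8)) > 0.5

def encode(mask):
    """Stand-in encoder projecting a mask into the same latent space."""
    return rng.normal(size=16)

def uncertainty_map(pred_mask, temperature=1.0):
    """Weight every bank mask by its latent similarity to the prediction
    and average: pixels where similar valid segmentations disagree with
    the prediction receive high uncertainty."""
    z = encode(pred_mask)
    sims = latent_bank @ z / (
        np.linalg.norm(latent_bank, axis=1) * np.linalg.norm(z) + 1e-8
    )
    w = np.exp(sims / temperature)
    w /= w.sum()
    soft = np.tensordot(w, bank_masks.astype(float), axes=1)  # weighted vote
    return np.abs(soft - pred_mask.astype(float))             # disagreement

pred = rng.random((8, 8)) > 0.5
u = uncertainty_map(pred)
print("mean uncertainty:", round(float(u.mean()), 3))
```

Because the vote is restricted to segmentations that are valid by construction, the resulting map stays anatomically consistent in a way that pure per-pixel entropy does not.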
arXiv Detail & Related papers (2022-06-15T16:56:58Z)
- Using Soft Labels to Model Uncertainty in Medical Image Segmentation [0.0]
We propose a simple method to obtain soft labels from the annotations of multiple physicians.
For each image, our method produces a single well-calibrated output that can be thresholded at multiple confidence levels.
We evaluated our method on the MICCAI 2021 QUBIQ challenge, showing that it performs well across multiple medical image segmentation tasks.
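The soft-label construction itself is simple enough to show directly. The toy annotations below are invented for illustration; in the paper the soft labels train a model whose single calibrated output is then thresholded, whereas here we threshold the soft label itself to show the multiple-confidence-level idea.

```python
import numpy as np

# Hypothetical binary annotations of the same 2x3 image by four physicians.
annotations = np.array([
    [[0, 1, 1], [0, 1, 0]],
    [[0, 1, 1], [1, 1, 0]],
    [[0, 1, 0], [0, 1, 0]],
    [[0, 1, 1], [0, 1, 1]],
])

# Soft label: per-pixel fraction of annotators who marked the pixel.
soft = annotations.mean(axis=0)

# A single well-calibrated output can be thresholded at multiple
# confidence levels, each yielding one plausible hard segmentation.
for tau in (0.25, 0.5, 0.75):
    print(tau, (soft >= tau).astype(int).tolist())
```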
arXiv Detail & Related papers (2021-09-26T14:47:18Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
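A common few-shot segmentation scheme consistent with this description is prototype matching over a discriminative embedding. The sketch below is a generic illustration of that scheme, not this paper's architecture: the embeddings are synthetic, and the +3.0 shift stands in for what the discriminative training objective would achieve (same-class features clustering together).

```python
import numpy as np

rng = np.random.default_rng(4)

D = 8  # embedding dimension
# Hypothetical per-pixel embeddings (H, W, D) from a deep network.
support_feat = rng.normal(size=(16, 16, D))
support_mask = np.zeros((16, 16), dtype=bool)
support_mask[4:12, 4:12] = True
# Stand-in for the discriminative-embedding objective: foreground
# features are shifted so the two classes cluster apart.
support_feat[support_mask] += 3.0

query_feat = rng.normal(size=(16, 16, D))
query_mask_true = np.zeros((16, 16), dtype=bool)
query_mask_true[6:14, 2:10] = True
query_feat[query_mask_true] += 3.0

# One prototype per class from the support set (masked average pooling).
fg_proto = support_feat[support_mask].mean(axis=0)
bg_proto = support_feat[~support_mask].mean(axis=0)

# Label each query pixel by its nearer prototype.
d_fg = np.linalg.norm(query_feat - fg_proto, axis=-1)
d_bg = np.linalg.norm(query_feat - bg_proto, axis=-1)
pred = d_fg < d_bg
iou = (pred & query_mask_true).sum() / (pred | query_mask_true).sum()
print("IoU:", iou)
```

The better the embedding separates classes, the more reliable this nearest-prototype rule is from a single annotated support image.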
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
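The multiple instance learning idea, training on slide-level labels while treating each slide as a bag of patch instances, can be sketched at inference time. The scores and slide names below are invented for illustration, and top-k mean pooling is one common MIL aggregation choice, not necessarily the one this framework uses.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical patch-level tumor scores for two whole slides (each a
# "bag" of instances); only slide-level labels are available in training.
slides = {
    "tumor_slide": rng.random(50),         # some highly suspicious patches
    "normal_slide": rng.random(50) * 0.3,  # uniformly low scores
}

# Classic MIL rule: a bag is positive if its top instances are positive.
k = 5
for name, patch_scores in slides.items():
    top_k = np.sort(patch_scores)[-k:]  # most suspicious patches
    bag_score = top_k.mean()            # smooth max-pooling over the bag
    print(name, "positive" if bag_score > 0.5 else "negative")
```

Averaging the top-k patches rather than taking a hard max makes the bag score less sensitive to a single noisy patch prediction.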
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.