Multi-Modal Evaluation Approach for Medical Image Segmentation
- URL: http://arxiv.org/abs/2302.04135v1
- Date: Wed, 8 Feb 2023 15:31:33 GMT
- Title: Multi-Modal Evaluation Approach for Medical Image Segmentation
- Authors: Seyed M.R. Modaresi, Aomar Osmani, Mohammadreza Razzazi, Abdelghani
Chibani
- Abstract summary: We propose a novel multi-modal evaluation (MME) approach to measure the effectiveness of different segmentation methods.
We introduce new relevant and interpretable characteristics, including detection property, boundary alignment, uniformity, total volume, and relative volume.
Our proposed approach is open-source and publicly available for use.
- Score: 4.989480853499916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Manual segmentation of medical images (e.g., segmenting tumors in CT scans)
is a high-effort task that can be accelerated with machine learning techniques.
However, selecting the right segmentation approach depends on the evaluation
function, particularly in medical image segmentation where we must deal with
dependency between voxels. For instance, in contrast to classical systems where
the predictions are either correct or incorrect, predictions in medical image
segmentation may be partially correct and incorrect simultaneously. In this
paper, we explore this expressiveness to extract the useful properties of these
systems and formally define a novel multi-modal evaluation (MME) approach to
measure the effectiveness of different segmentation methods. This approach
improves the segmentation evaluation by introducing new relevant and
interpretable characteristics, including detection property, boundary
alignment, uniformity, total volume, and relative volume. Our proposed approach
is open-source and publicly available for use. We have conducted several
reproducible experiments, including the segmentation of pancreas, liver tumors,
and multi-organs datasets, to show the applicability of the proposed approach.
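The authors' released implementation is not reproduced here. As a rough, non-authoritative illustration of the kind of characteristics the abstract lists (detection, total volume, relative volume, and a boundary-alignment proxy), the sketch below computes them for a pair of 3D binary masks with NumPy/SciPy; the function name, the one-directional surface distance, and the handling of empty masks are assumptions made for this sketch only, not the paper's API.
```python
# Illustrative sketch only: NOT the authors' released MME implementation.
# Computes a few characteristics in the spirit of those listed in the abstract
# for a pair of 3D binary masks (prediction vs. ground truth).
import numpy as np
from scipy import ndimage


def evaluate_masks(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    voxel_vol = float(np.prod(spacing))  # physical volume of one voxel

    # Detection: does the prediction overlap the ground-truth region at all?
    detected = bool(np.logical_and(pred, gt).any())

    # Total volume of each mask, in physical units.
    total_vol_pred = pred.sum() * voxel_vol
    total_vol_gt = gt.sum() * voxel_vol

    # Relative volume: predicted volume as a fraction of the reference volume.
    relative_vol = total_vol_pred / total_vol_gt if total_vol_gt > 0 else np.nan

    # Crude boundary-alignment proxy: mean distance from predicted surface
    # voxels to the ground-truth surface (one direction only, for brevity).
    def surface(mask):
        return mask & ~ndimage.binary_erosion(mask)

    gt_surf = surface(gt)
    if gt_surf.any() and pred.any():
        dist_to_gt_surf = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
        boundary_align = float(dist_to_gt_surf[surface(pred)].mean())
    else:
        boundary_align = np.nan

    return {
        "detected": detected,
        "total_volume_pred": total_vol_pred,
        "total_volume_gt": total_vol_gt,
        "relative_volume": relative_vol,
        "mean_boundary_distance": boundary_align,
    }
```
Reporting such characteristics separately, rather than collapsing everything into a single overlap score, is what allows a prediction to be judged partially correct and incorrect at the same time, as the abstract argues.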
Related papers
- MedUHIP: Towards Human-In-the-Loop Medical Segmentation [5.520419627866446]
Medical image segmentation is particularly complicated by inherent uncertainties.
We propose a novel approach that integrates an uncertainty-aware model with human-in-the-loop interaction.
Our method showcases superior segmentation capabilities, outperforming a wide range of deterministic and uncertainty-aware models.
arXiv Detail & Related papers (2024-08-03T01:06:02Z) - Deep Multimodal Fusion of Data with Heterogeneous Dimensionality via
Projective Networks [4.933439602197885]
We propose a novel deep learning-based framework for the fusion of multimodal data with heterogeneous dimensionality (e.g., 3D+2D)
The framework was validated on the following tasks: segmentation of geographic atrophy (GA), a late-stage manifestation of age-related macular degeneration, and segmentation of retinal blood vessels (RBV) in multimodal retinal imaging.
Our results show that the proposed method outperforms the state-of-the-art monomodal methods on GA and RBV segmentation by up to 3.10% and 4.64% Dice, respectively.
arXiv Detail & Related papers (2024-02-02T11:03:33Z) - CGAM: Click-Guided Attention Module for Interactive Pathology Image
Segmentation via Backpropagating Refinement [8.590026259176806]
Tumor region segmentation is an essential task for the quantitative analysis of digital pathology.
Recent deep neural networks have shown state-of-the-art performance in various image-segmentation tasks.
We propose an interactive segmentation method that allows users to refine the output of deep neural networks through click-type user interactions.
arXiv Detail & Related papers (2023-07-03T13:45:24Z) - Modality-Agnostic Learning for Medical Image Segmentation Using
Multi-modality Self-distillation [1.815047691981538]
We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
arXiv Detail & Related papers (2023-06-06T14:48:50Z) - Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion (a minimal sketch of aggregating such a distribution of masks is given after this list).
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - An Embarrassingly Simple Consistency Regularization Method for
Semi-Supervised Medical Image Segmentation [0.0]
The scarcity of pixel-level annotation is a prevalent problem in medical image segmentation tasks.
We introduce a novel regularization strategy involving computation-based mixing for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2022-02-01T16:21:14Z) - Inconsistency-aware Uncertainty Estimation for Semi-supervised Medical
Image Segmentation [92.9634065964963]
We present a new semi-supervised segmentation model, namely, conservative-radical network (CoraNet) based on our uncertainty estimation and separate self-training strategy.
Compared with the current state of the art, our CoraNet has demonstrated superior performance.
arXiv Detail & Related papers (2021-10-17T08:49:33Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Towards Cross-modality Medical Image Segmentation with Online Mutual
Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z) - Weakly supervised multiple instance learning histopathological tumor
segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
arXiv Detail & Related papers (2020-04-10T13:12:47Z)
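As a hedged aside to the diffusion-based entry above: a distribution of sampled masks is commonly summarized into a per-voxel agreement map before inspection or evaluation. The sketch below assumes the samples are already available as a stack of binary arrays; it illustrates this general aggregation step only and is not code from any of the cited papers.
```python
# Minimal sketch, assuming `samples` is a stack of K binary masks (K, D, H, W)
# drawn from any model that produces multiple plausible segmentations.
import numpy as np


def aggregate_masks(samples: np.ndarray, threshold: float = 0.5):
    samples = samples.astype(float)
    foreground_prob = samples.mean(axis=0)         # per-voxel agreement in [0, 1]
    consensus_mask = foreground_prob >= threshold  # majority-vote segmentation
    uncertainty = foreground_prob * (1.0 - foreground_prob)  # largest where samples disagree
    return consensus_mask, foreground_prob, uncertainty
```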
This list is automatically generated from the titles and abstracts of the papers in this site.