SSEGEP: Small SEGment Emphasized Performance evaluation metric for
medical image segmentation
- URL: http://arxiv.org/abs/2109.03435v1
- Date: Wed, 8 Sep 2021 05:05:49 GMT
- Title: SSEGEP: Small SEGment Emphasized Performance evaluation metric for
medical image segmentation
- Authors: Ammu R, Neelam Sinha
- Abstract summary: "SSEGEP" (Small SEGment Emphasized Performance evaluation metric), range: 0 (bad) to 1 (good).
Across 33 fundus images, where the largest exudate covers 1.41% of the image and the smallest 0.0002%, the proposed metric is 30% closer to MOS than the Dice Similarity Coefficient (DSC).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automatic image segmentation is a critical component of medical image
analysis, and hence quantifying segmentation performance is crucial. Challenges
in medical image segmentation arise mainly from spatial variation of the
regions to be segmented and from class imbalance. Commonly used metrics treat
all detected pixels indiscriminately. However, pixels in smaller segments must
be treated differently from pixels in larger segments: detecting smaller
segments aids early treatment of the associated disease, and they are also
easier to miss. To address this, we propose a novel evaluation metric for
segmentation performance that emphasizes smaller segments by assigning higher
weightage to their pixels. Weighted false positives are also considered in
deriving the new metric, named "SSEGEP" (Small SEGment Emphasized Performance
evaluation metric), with range 0 (bad) to 1 (good). Experiments were performed
on diverse anatomies (eye, liver, pancreas and breast) from publicly available
datasets to show the applicability of the proposed metric across different
imaging techniques. Mean opinion score (MOS) and statistical significance
testing are used to quantify the relevance of the proposed approach. Across 33
fundus images, where the largest exudate covers 1.41% of the image and the
smallest 0.0002%, the proposed metric is 30% closer to MOS than the Dice
Similarity Coefficient (DSC). Statistical significance testing yielded a
promising p-value of the order of 10^{-18} with SSEGEP for hepatic tumors,
compared to DSC. The proposed metric performs better for images containing
multiple segments under a single label.
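The exact SSEGEP formula is not reproduced in this summary, but the core idea — weighting each ground-truth pixel inversely to the area of its segment, and folding weighted false positives into a Dice-style ratio — can be sketched as follows. Inverse-area weights and the uniform false-positive weight here are illustrative assumptions, not the authors' formulation:

```python
from collections import deque

def label_components(mask):
    """4-connected component labeling on a 2D binary mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and labels[i][j] == 0:
                count += 1
                labels[i][j] = count
                q = deque([(i, j)])
                while q:  # breadth-first flood fill of one segment
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            q.append((ny, nx))
    return labels, count

def ssegep_like(gt, pred):
    """Small-segment-emphasized score: each ground-truth pixel is weighted
    by the inverse of its segment's area, so every segment contributes
    equally regardless of size; false positives are also weighted (here
    uniformly, as an illustrative choice). Range: 0 (bad) to 1 (good)."""
    h, w = len(gt), len(gt[0])
    labels, n = label_components(gt)
    if n == 0:
        return 1.0 if not any(any(row) for row in pred) else 0.0
    areas = [0] * (n + 1)
    for i in range(h):
        for j in range(w):
            areas[labels[i][j]] += 1
    wtp = wfn = fp = 0.0
    for i in range(h):
        for j in range(w):
            if gt[i][j]:
                wgt = 1.0 / areas[labels[i][j]]  # smaller segment -> larger weight
                if pred[i][j]:
                    wtp += wgt
                else:
                    wfn += wgt
            elif pred[i][j]:
                fp += 1.0 / (h * w)  # illustrative uniform false-positive weight
    return 2 * wtp / (2 * wtp + wfn + fp)
```

On a toy 6×6 mask with a 9-pixel segment and a 1-pixel segment, missing only the 1-pixel segment drops this score to about 0.67, while plain Dice stays near 0.95 — exactly the small-segment emphasis the abstract describes.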
Related papers
- Every Component Counts: Rethinking the Measure of Success for Medical Semantic Segmentation in Multi-Instance Segmentation Tasks [60.80828925396154]
We present Connected-Component (CC) Metrics, a novel semantic segmentation evaluation protocol.
We motivate this setup in the common medical scenario of semantic segmentation in a full-body PET/CT.
We show how existing semantic segmentation metrics suffer from a bias towards larger connected components.
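The bias toward larger components is easy to see with a toy macro-average: scoring each ground-truth component individually and averaging makes a missed small lesion count as much as a missed large one, whereas global Dice barely notices it. A minimal sketch, with components given as pixel-coordinate sets — an illustrative reading of the idea, not the paper's exact CC-Metrics definition:

```python
def dice(a, b):
    """Dice coefficient between two pixel sets."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def cc_dice(components, pred):
    """Macro-average Dice over ground-truth connected components,
    restricting the prediction to each component's footprint.
    Illustrative only; not the authors' exact protocol."""
    return sum(dice(c, pred & c) for c in components) / len(components)

# one large and one tiny ground-truth component
big = {(i, j) for i in range(10) for j in range(10)}  # 100 pixels
small = {(20, 20)}                                    # 1 pixel
pred = set(big)                                       # tiny lesion missed

global_dice = dice(big | small, pred)  # ~0.995: miss barely penalized
per_cc = cc_dice([big, small], pred)   # 0.5: missed lesion counts fully
```

The global score stays near 1.0 while the per-component score halves, which is the size bias the authors rethink.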
arXiv Detail & Related papers (2024-10-24T12:26:05Z)
- MedCLIP-SAM: Bridging Text and Image Towards Universal Medical Image Segmentation [2.2585213273821716]
We propose a novel framework, called MedCLIP-SAM, that combines CLIP and SAM models to generate segmentation of clinical scans.
In extensive tests on three diverse segmentation tasks and medical image modalities, the proposed framework demonstrates excellent accuracy.
arXiv Detail & Related papers (2024-03-29T15:59:11Z)
- I-MedSAM: Implicit Medical Image Segmentation with Segment Anything [24.04558900909617]
We propose I-MedSAM, which leverages the benefits of both continuous representations and SAM to obtain better cross-domain ability and accurate boundary delineation.
Our proposed method with only 1.6M trainable parameters outperforms existing methods including discrete and implicit methods.
arXiv Detail & Related papers (2023-11-28T00:43:52Z)
- MLN-net: A multi-source medical image segmentation method for clustered microcalcifications using multiple layer normalization [8.969596531778121]
We propose a novel framework named MLN-net, which can accurately segment multi-source images using only single-source images.
In this paper, extensive experiments validate the effectiveness of MLN-net in segmenting clustered microcalcifications from different domains.
arXiv Detail & Related papers (2023-09-06T05:56:30Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- Inconsistency-aware Uncertainty Estimation for Semi-supervised Medical Image Segmentation [92.9634065964963]
We present a new semi-supervised segmentation model, namely, conservative-radical network (CoraNet) based on our uncertainty estimation and separate self-training strategy.
Compared with the current state of the art, our CoraNet has demonstrated superior performance.
arXiv Detail & Related papers (2021-10-17T08:49:33Z)
- Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification [0.22312377591335414]
Whole-slide images typically have to be divided into smaller patches, which are then analyzed individually using machine-learning-based approaches.
We propose to subdivide the image into coherent regions prior to classification by grouping visually similar adjacent image pixels into larger segments, i.e. superpixels.
The algorithm has been developed and validated on a dataset of 159 hand-annotated whole-slide-images of colon resections and its performance has been compared to a standard patch-based approach.
arXiv Detail & Related papers (2021-06-30T08:34:06Z)
- Modeling the Probabilistic Distribution of Unlabeled Data for One-shot Medical Image Segmentation [40.41161371507547]
We develop a data augmentation method for one-shot brain magnetic resonance imaging (MRI) image segmentation.
Our method exploits only one labeled MRI image (named atlas) and a few unlabeled images.
Our method outperforms the state-of-the-art one-shot medical segmentation methods.
arXiv Detail & Related papers (2021-02-03T12:28:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.