Med-2D SegNet: A Light Weight Deep Neural Network for Medical 2D Image Segmentation
- URL: http://arxiv.org/abs/2504.14715v1
- Date: Sun, 20 Apr 2025 19:04:43 GMT
- Title: Med-2D SegNet: A Light Weight Deep Neural Network for Medical 2D Image Segmentation
- Authors: Md. Sanaullah Chowdhury, Salauddin Tapu, Noyon Kumar Sarkar, Ferdous Bin Ali, Lameya Sabrin,
- Abstract summary: We introduce Med-2D SegNet, a novel and highly efficient segmentation architecture. Med-2D SegNet achieves state-of-the-art performance across multiple benchmark datasets. Central to its success is the compact Med Block, a specialized encoder design.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate and efficient medical image segmentation is crucial for advancing clinical diagnostics and surgical planning, yet remains a complex challenge due to the variability in anatomical structures and the demand for low-complexity models. In this paper, we introduce Med-2D SegNet, a novel and highly efficient segmentation architecture that delivers outstanding accuracy while maintaining a minimal computational footprint. Med-2D SegNet achieves state-of-the-art performance across multiple benchmark datasets, including KVASIR-SEG, PH2, EndoVis, and GLAS, with an average Dice similarity coefficient (DSC) of 89.77% across 20 diverse datasets. Central to its success is the compact Med Block, a specialized encoder design that incorporates dimension expansion and parameter reduction, enabling precise feature extraction while keeping model parameters to a low count of just 2.07 million. Med-2D SegNet excels in cross-dataset generalization, particularly in polyp segmentation, where it was trained on KVASIR-SEG and showed strong performance on unseen datasets, demonstrating its robustness in zero-shot learning scenarios, even though we acknowledge that further improvements are possible. With top-tier performance in both binary and multi-class segmentation, Med-2D SegNet redefines the balance between accuracy and efficiency, setting a new benchmark for medical image analysis. This work paves the way for developing accessible, high-performance diagnostic tools suitable for clinical environments and resource-constrained settings, making it a step forward in the democratization of advanced medical technology.
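As a rough, self-contained illustration of the two quantifiable ideas in the abstract, the PyTorch sketch below pairs a generic expand-then-reduce depthwise-separable block with a Dice similarity coefficient helper. The `ExpandReduceBlock` (1x1 expansion, depthwise 3x3, 1x1 projection) is an assumed stand-in for illustration only and is not the paper's actual Med Block; only the Dice formula, 2|A∩B| / (|A| + |B|), is standard.

```python
# Illustrative sketch only: a depthwise-separable block that expands channels
# and then projects them back, plus a Dice coefficient helper. The real
# Med Block design is specified in the paper; the layer choices below are
# assumptions made for illustration.
import torch
import torch.nn as nn


class ExpandReduceBlock(nn.Module):
    """Hypothetical encoder block: 1x1 expansion -> depthwise 3x3 -> 1x1 reduction."""

    def __init__(self, in_ch: int, out_ch: int, expansion: int = 4):
        super().__init__()
        mid = in_ch * expansion
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, kernel_size=1, bias=False),   # dimension expansion
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1,
                      groups=mid, bias=False),                  # depthwise conv keeps parameters low
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_ch, kernel_size=1, bias=False),  # parameter-reducing projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice similarity coefficient (DSC) for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = (pred > 0.5).float()
    target = (target > 0.5).float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    block = ExpandReduceBlock(32, 64)
    n_params = sum(p.numel() for p in block.parameters())
    print(f"block parameters: {n_params}")       # stays small thanks to the depthwise design
    y = block(torch.randn(1, 32, 128, 128))
    print(y.shape)                               # torch.Size([1, 64, 128, 128])
```

Depthwise-separable designs like this are one common way to keep a 2D encoder in the low-millions parameter range; whether Med-2D SegNet uses this exact mechanism is not specified here.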
Related papers
- Multi-encoder nnU-Net outperforms Transformer models with self-supervised pretraining [0.0]
This study addresses the essential task of medical image segmentation, which involves the automatic identification and delineation of anatomical structures and pathological regions in medical images. We propose a novel self-supervised learning Multi-encoder nnU-Net architecture designed to process multiple MRI modalities independently through separate encoders. Our Multi-encoder nnU-Net demonstrates exceptional performance, achieving a Dice Similarity Coefficient (DSC) of 93.72%, which surpasses that of other models such as vanilla nnU-Net, SegResNet, and Swin UNETR.
arXiv Detail & Related papers (2025-04-04T14:31:06Z)
- Assessing the Performance of the DINOv2 Self-supervised Learning Vision Transformer Model for the Segmentation of the Left Atrium from MRI Images [1.2499537119440245]
DINOv2 is a self-supervised learning vision transformer pretrained on natural images; here it is evaluated for left atrium (LA) segmentation from MRI.
We demonstrate its ability to provide accurate and consistent segmentation, achieving a mean Dice score of 0.871 and a Jaccard Index of 0.792 for end-to-end fine-tuning.
These results suggest that DINOv2 effectively adapts to MRI with limited data, highlighting its potential as a competitive tool for segmentation and encouraging broader use in medical imaging.
arXiv Detail & Related papers (2024-11-14T17:15:51Z)
- MedCLIP-SAMv2: Towards Universal Text-Driven Medical Image Segmentation [2.2585213273821716]
We introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans. Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss. We also investigate using zero-shot segmentation labels within a weakly supervised paradigm to enhance segmentation quality further.
arXiv Detail & Related papers (2024-09-28T23:10:37Z)
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network that combines convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z)
- A Mutual Inclusion Mechanism for Precise Boundary Segmentation in Medical Images [2.9137615132901704]
We present a novel deep learning-based approach, MIPC-Net, for precise boundary segmentation in medical images.
We introduce the MIPC module, which enhances the focus on channel information when extracting position features.
We also propose the GL-MIPC-Residue, a global residual connection that enhances the integration of the encoder and decoder.
arXiv Detail & Related papers (2024-04-12T02:14:35Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- LDMRes-Net: Enabling Efficient Medical Image Segmentation on IoT and Edge Platforms [9.626726110488386]
We propose a lightweight dual-multiscale residual block-based computational neural network tailored for medical image segmentation on IoT and edge platforms.
LDMRes-Net overcomes limitations with its remarkably low number of learnable parameters (0.072M), making it highly suitable for resource-constrained devices.
arXiv Detail & Related papers (2023-06-09T10:34:18Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-fine Framework and Its Adversarial Examples [74.92488215859991]
We propose a novel 3D-based coarse-to-fine framework to efficiently tackle these challenges.
The proposed 3D-based framework outperforms its 2D counterparts by a large margin, since it can leverage the rich spatial information along all three axes.
We conduct experiments on three datasets: the NIH pancreas dataset, the JHMI pancreas dataset, and the JHMI pathological cyst dataset.
arXiv Detail & Related papers (2020-10-29T15:39:19Z)
- DONet: Dual Objective Networks for Skin Lesion Segmentation [77.9806410198298]
We propose a simple yet effective framework, named Dual Objective Networks (DONet), to improve skin lesion segmentation.
Our DONet adopts two symmetric decoders to produce different predictions for different objectives; a minimal illustrative sketch of this layout follows this entry.
To address the challenge of the large variety of lesion scales and shapes in dermoscopic images, we additionally propose a recurrent context encoding module (RCEM).
arXiv Detail & Related papers (2020-08-19T06:02:46Z)
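As a companion to the DONet entry directly above, the following is a minimal sketch of a shared-encoder, dual-decoder layout; the layers, channel widths, and averaging fusion are assumptions made for illustration and do not reproduce the published DONet or its RCEM module.

```python
# Minimal sketch of a shared-encoder / dual-decoder segmentation network.
# Illustrative assumptions only; not the published DONet architecture.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DualDecoderNet(nn.Module):
    """Shared encoder feeding two symmetric decoders, each aimed at a different objective."""

    def __init__(self, in_ch: int = 3, base: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(in_ch, base),
            nn.MaxPool2d(2),
            conv_block(base, base * 2),
        )

        def make_decoder() -> nn.Sequential:
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                conv_block(base * 2, base),
                nn.Conv2d(base, 1, kernel_size=1),
            )

        self.decoder_a = make_decoder()   # e.g. region-oriented objective
        self.decoder_b = make_decoder()   # e.g. boundary-oriented objective

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)
        return self.decoder_a(feats), self.decoder_b(feats)


if __name__ == "__main__":
    net = DualDecoderNet()
    pred_a, pred_b = net(torch.randn(1, 3, 128, 128))
    fused = torch.sigmoid((pred_a + pred_b) / 2)   # simple fusion of the two predictions
    print(fused.shape)                             # torch.Size([1, 1, 128, 128])
```

In training, each decoder head would be attached to its own loss, which is the "dual objective" idea the summary describes.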