PathSegDiff: Pathology Segmentation using Diffusion model representations
- URL: http://arxiv.org/abs/2504.06950v1
- Date: Wed, 09 Apr 2025 14:58:21 GMT
- Authors: Sachin Kumar Danisetty, Alexandros Graikos, Srikar Yellapragada, Dimitris Samaras
- Abstract summary: We propose PathSegDiff, a novel approach for histopathology image segmentation that leverages Latent Diffusion Models (LDMs) as pre-trained feature extractors. Our method utilizes a pathology-specific LDM, guided by a self-supervised encoder, to extract rich semantic information from H&E stained histopathology images. Our experiments demonstrate significant improvements over traditional methods on the BCSS and GlaS datasets.
- Score: 63.20694440934692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image segmentation is crucial in many computational pathology pipelines, including accurate disease diagnosis, subtyping, and outcome and survivability prediction. The common approach for training a segmentation model relies on a pre-trained feature extractor and a dataset of paired image and mask annotations. These are used to train a lightweight prediction model that translates features into per-pixel classes. The choice of the feature extractor is central to the performance of the final segmentation model, and recent literature has focused on finding tasks to pre-train the feature extractor. In this paper, we propose PathSegDiff, a novel approach for histopathology image segmentation that leverages Latent Diffusion Models (LDMs) as pre-trained feature extractors. Our method utilizes a pathology-specific LDM, guided by a self-supervised encoder, to extract rich semantic information from H&E stained histopathology images. We employ a simple, fully convolutional network to process the features extracted from the LDM and generate segmentation masks. Our experiments demonstrate significant improvements over traditional methods on the BCSS and GlaS datasets, highlighting the effectiveness of domain-specific diffusion pre-training in capturing intricate tissue structures and enhancing segmentation accuracy in histopathology images.
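The pipeline the abstract describes, a frozen feature extractor feeding a lightweight per-pixel prediction head, can be sketched as follows. This is a minimal illustration, not the authors' code: the LDM feature extractor is stood in for by random features, and the prediction head is reduced to a 1x1 convolution expressed as a matrix multiply over channels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for features from a frozen, pathology-specific LDM
# (hypothetical: real features would come from the diffusion model's UNet).
C, H, W, NUM_CLASSES = 64, 32, 32, 5
features = rng.standard_normal((C, H, W))

# A 1x1 convolution over channels is a (NUM_CLASSES, C) matrix applied
# independently at every pixel -- the "lightweight prediction model that
# translates features into per-pixel classes".
head = rng.standard_normal((NUM_CLASSES, C)) * 0.01

logits = np.einsum("kc,chw->khw", head, features)  # (NUM_CLASSES, H, W)
mask = logits.argmax(axis=0)                       # (H, W) per-pixel class ids
```

Only the head would be trained on the paired image/mask annotations; the feature extractor stays fixed, which is what makes the choice of pre-training so central.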
Related papers
- Segmentation by Factorization: Unsupervised Semantic Segmentation for Pathology by Factorizing Foundation Model Features [0.0]
Factorization (F-SEG) is an unsupervised segmentation method for pathology.
It generates segmentation masks from pre-trained deep learning models.
arXiv Detail & Related papers (2024-09-09T15:11:45Z)
- GRU-Net: Gaussian Attention Aided Dense Skip Connection Based MultiResUNet for Breast Histopathology Image Segmentation [24.85210810502592]
This paper presents a modified version of MultiResU-Net for histopathology image segmentation.
It is selected as the backbone for its ability to analyze and segment complex features at multiple scales.
We validate our approach on two diverse breast cancer histopathology image datasets.
arXiv Detail & Related papers (2024-06-12T19:17:17Z)
- EmerDiff: Emerging Pixel-level Semantic Knowledge in Diffusion Models [52.3015009878545]
We develop an image segmentor capable of generating fine-grained segmentation maps without any additional training.
Our framework identifies semantic correspondences between image pixels and spatial locations of low-dimensional feature maps.
In extensive experiments, the produced segmentation maps are demonstrated to be well delineated and capture detailed parts of the images.
arXiv Detail & Related papers (2024-01-22T07:34:06Z)
- Explanations of Classifiers Enhance Medical Image Segmentation via End-to-end Pre-training [37.11542605885003]
Medical image segmentation aims to identify and locate abnormal structures in medical images, such as chest radiographs, using deep neural networks.
Our work collects explanations from well-trained classifiers to generate pseudo labels of segmentation tasks.
We then use the Integrated Gradients (IG) method to distill and boost the explanations obtained from the classifiers, generating massive diagnosis-oriented localization labels (DoLL).
These DoLL-annotated images are used for pre-training the model before fine-tuning it for downstream segmentation tasks, including COVID-19 infectious areas, lungs, heart, and clavicles.
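Integrated Gradients, as used above to turn classifier explanations into localization pseudo-labels, attributes a prediction to input pixels by averaging gradients along a straight path from a baseline to the input. Below is a minimal sketch on a toy linear model, where the gradient is known analytically, so IG's completeness property (attributions summing to the output difference) can be verified exactly; the thresholding into a pseudo-mask at the end is an illustrative assumption, not the paper's exact DoLL procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 8, 8
w = rng.standard_normal((H, W))      # weights of a toy linear "classifier"
x = rng.standard_normal((H, W))      # input image
baseline = np.zeros((H, W))          # IG baseline (black image)

def f(img):
    return float((w * img).sum())    # model output (a single logit)

def integrated_gradients(x, baseline, steps=50):
    # IG_i = (x_i - b_i) * mean over alpha of df/dx_i at b + alpha * (x - b).
    # For a linear model the gradient is w everywhere, but we still sum
    # along the path to show the general Riemann-approximation recipe.
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        grad = w                     # analytic gradient of f at `point`
        total += grad
    return (x - baseline) * total / steps

attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline).
assert np.isclose(attr.sum(), f(x) - f(baseline))

# Illustrative pseudo-label: keep pixels with above-median attribution.
pseudo_mask = (attr > np.median(attr)).astype(np.uint8)
```

With a real network the analytic gradient would be replaced by backpropagation at each path point; the aggregation into a binary localization label proceeds the same way.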
arXiv Detail & Related papers (2024-01-16T16:18:42Z)
- ArSDM: Colonoscopy Images Synthesis with Adaptive Refinement Semantic Diffusion Models [69.9178140563928]
Colonoscopy analysis is essential for assisting clinical diagnosis and treatment.
The scarcity of annotated data limits the effectiveness and generalization of existing methods.
We propose an Adaptive Refinement Semantic Diffusion Model (ArSDM) to generate colonoscopy images that benefit the downstream tasks.
arXiv Detail & Related papers (2023-09-03T07:55:46Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
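A common way to consume such a distribution of segmentation masks is to draw several samples and aggregate them into a mean probability map plus a per-pixel disagreement estimate. The sampler below is a stand-in (random perturbations of one base mask), not an actual diffusion model; it only illustrates the aggregation step.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, N = 16, 16, 8                  # mask size, number of samples

base = rng.random((H, W)) > 0.5      # stand-in "true" foreground region

def sample_mask():
    # Stand-in for one draw from the model's mask distribution:
    # the base mask with ~10% of pixels randomly flipped.
    flips = rng.random((H, W)) < 0.1
    return np.logical_xor(base, flips).astype(np.float64)

samples = np.stack([sample_mask() for _ in range(N)])  # (N, H, W)

prob = samples.mean(axis=0)          # per-pixel foreground probability
uncertainty = samples.var(axis=0)    # high where the samples disagree
consensus = (prob > 0.5).astype(np.uint8)
```

The variance map is what makes ambiguity-aware models clinically useful: it flags regions where plausible annotations diverge rather than committing to a single mask.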
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Diffusion Adversarial Representation Learning for Self-supervised Vessel Segmentation [36.65094442100924]
Vessel segmentation in medical images is one of the important tasks in the diagnosis of vascular diseases and therapy planning.
We introduce a novel diffusion adversarial representation learning (DARL) model that leverages a denoising diffusion probabilistic model with adversarial learning.
Our method significantly outperforms existing unsupervised and self-supervised methods in vessel segmentation.
arXiv Detail & Related papers (2022-09-29T06:06:15Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
- Learning of Inter-Label Geometric Relationships Using Self-Supervised Learning: Application To Gleason Grade Segmentation [4.898744396854313]
We propose a method to synthesize PCa histopathology images by learning the geometric relationship between different disease labels.
We use a weakly supervised segmentation approach that uses Gleason score to segment the diseased regions.
The resulting segmentation map is used to train a Shape Restoration Network (ShaRe-Net) to predict missing mask segments.
arXiv Detail & Related papers (2021-10-01T13:47:07Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.