Unit-Based Histopathology Tissue Segmentation via Multi-Level Feature Representation
- URL: http://arxiv.org/abs/2507.12427v1
- Date: Wed, 16 Jul 2025 17:15:18 GMT
- Title: Unit-Based Histopathology Tissue Segmentation via Multi-Level Feature Representation
- Authors: Ashkan Shakarami, Azade Farshad, Yousef Yeganeh, Lorenzo Nicole, Peter Schuffler, Stefano Ghidoni, Nassir Navab
- Abstract summary: UTS is a unit-based tissue segmentation framework for histopathology. It classifies each fixed-size 32 × 32 tile, rather than each pixel, as the segmentation unit. It supports clinically relevant tasks such as tumor-stroma quantification and surgical margin assessment.
- Score: 36.53156355717765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose UTS, a unit-based tissue segmentation framework for histopathology that classifies each fixed-size 32 × 32 tile, rather than each pixel, as the segmentation unit. This approach reduces annotation effort and improves computational efficiency without compromising accuracy. To implement it, we introduce a Multi-Level Vision Transformer (L-ViT), which leverages multi-level feature representation to capture both fine-grained morphology and global tissue context. Trained to segment breast tissue into three categories (infiltrating tumor, non-neoplastic stroma, and fat), UTS supports clinically relevant tasks such as tumor-stroma quantification and surgical margin assessment. Evaluated on 386,371 tiles from 459 H&E-stained regions, it outperforms U-Net variants and transformer-based baselines. Code and dataset will be made available on GitHub.
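The unit-based idea from the abstract (classify each fixed 32 × 32 tile instead of each pixel) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' method: the tile size and the three breast-tissue classes follow the abstract, while the lightweight convolutional encoder is only a hypothetical stand-in for the paper's Multi-Level Vision Transformer (L-ViT).

```python
# Minimal sketch of unit-based tile segmentation (assumed structure, not the released UTS code).
import torch
import torch.nn as nn

TILE = 32                              # fixed tile size from the abstract
CLASSES = ["tumor", "stroma", "fat"]   # three breast-tissue categories from the abstract

class TileClassifier(nn.Module):
    """Classifies every non-overlapping 32x32 tile of an H&E image."""
    def __init__(self, num_classes: int = len(CLASSES)):
        super().__init__()
        # Hypothetical lightweight encoder standing in for L-ViT.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, H, W) with H and W divisible by TILE.
        b, c, h, w = x.shape
        # Cut the image into non-overlapping 32x32 tiles.
        tiles = x.unfold(2, TILE, TILE).unfold(3, TILE, TILE)   # (B, C, H/32, W/32, 32, 32)
        gh, gw = tiles.shape[2], tiles.shape[3]
        tiles = tiles.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, TILE, TILE)
        logits = self.head(self.encoder(tiles))                 # (B*gh*gw, num_classes)
        # Reassemble per-tile predictions into a coarse segmentation grid.
        return logits.view(b, gh, gw, -1).argmax(-1)            # (B, H/32, W/32)

if __name__ == "__main__":
    model = TileClassifier()
    dummy = torch.randn(1, 3, 256, 256)   # one 256x256 H&E patch
    print(model(dummy).shape)             # torch.Size([1, 8, 8])
```

Because each tile is classified independently, annotations can be collected at the tile level and the output is a coarse H/32 × W/32 label grid, which is where the reduction in annotation effort and computation relative to pixel-wise segmentation comes from.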
Related papers
- CAFCT-Net: A CNN-Transformer Hybrid Network with Contextual and Attentional Feature Fusion for Liver Tumor Segmentation [3.8952128960495638]
We propose CAFCT-Net, a CNN-Transformer hybrid network with contextual and attentional feature fusion, for liver tumor segmentation.
Experimental results show that the proposed model achieves a mean Intersection over Union (IoU) of 76.54% and a Dice coefficient of 84.29%.
arXiv Detail & Related papers (2024-01-30T10:42:11Z) - CIS-UNet: Multi-Class Segmentation of the Aorta in Computed Tomography Angiography via Context-Aware Shifted Window Self-Attention [10.335899694123711]
We introduce Context Infused Swin-UNet (CIS-UNet), a deep learning model for aortic segmentation.
CIS-UNet adopts a hierarchical encoder-decoder structure comprising a CNN encoder, symmetric decoder, skip connections, and a novel Context-aware Shifted Window Self-Attention (CSW-SA) as the bottleneck block.
We trained our model on computed tomography (CT) scans from 44 patients and tested it on 15 patients. CIS-UNet outperformed the state-of-the-art SwinUNetR segmentation model, achieving a superior mean Dice coefficient of 0.713.
arXiv Detail & Related papers (2024-01-23T19:17:20Z) - Automated 3D Tumor Segmentation using Temporal Cubic PatchGAN (TCuP-GAN) [0.276240219662896]
Temporal Cubic PatchGAN (TCuP-GAN) is a volume-to-volume translational model that marries a generative feature learning framework with Convolutional Long Short-Term Memory Networks (LSTMs).
We demonstrate the capabilities of TCuP-GAN on data from four segmentation challenges (Adult Glioma, Meningioma, Pediatric Tumors, and Sub-Saharan Africa).
We show that our framework successfully learns to predict robust multi-class segmentation masks across all the challenges.
arXiv Detail & Related papers (2023-11-23T18:37:26Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - UNesT: Local Spatial Representation Learning with Hierarchical Transformer for Efficient Medical Segmentation [29.287521185541298]
We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency.
arXiv Detail & Related papers (2022-09-28T19:14:38Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - Classifying Breast Histopathology Images with a Ductal Instance-Oriented Pipeline [10.605775819074886]
The duct-level segmenter identifies each individual duct inside a microscopic image.
It then extracts tissue-level information from the identified ductal instances.
The proposed DIOP takes only a few seconds to run at inference time.
arXiv Detail & Related papers (2020-12-11T05:43:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.