Comprehensive Pathological Image Segmentation via Teacher Aggregation for Tumor Microenvironment Analysis
- URL: http://arxiv.org/abs/2501.02909v1
- Date: Mon, 06 Jan 2025 10:33:14 GMT
- Title: Comprehensive Pathological Image Segmentation via Teacher Aggregation for Tumor Microenvironment Analysis
- Authors: Daisuke Komura, Maki Takao, Mieko Ochi, Takumi Onoyama, Hiroto Katoh, Hiroyuki Abe, Hiroyuki Sano, Teppei Konishi, Toshio Kumasaka, Tomoyuki Yokose, Yohei Miyagi, Tetsuo Ushiku, Shumpei Ishikawa
- Abstract summary: PAGET (Pathological image segmentation via AGgrEgated Teachers) is a new knowledge distillation approach that integrates multiple segmentation models. We demonstrate PAGET's ability to perform rapid, comprehensive TME segmentation across various tissue types and medical institutions.
- Score: 0.15206737182982252
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The tumor microenvironment (TME) plays a crucial role in cancer progression and treatment response, yet current methods for its comprehensive analysis in H&E-stained tissue slides face significant limitations in the diversity of tissue cell types and accuracy. Here, we present PAGET (Pathological image segmentation via AGgrEgated Teachers), a new knowledge distillation approach that integrates multiple segmentation models while considering the hierarchical nature of cell types in the TME. By leveraging a unique dataset created through immunohistochemical restaining techniques and existing segmentation models, PAGET enables simultaneous identification and classification of 14 key TME components. We demonstrate PAGET's ability to perform rapid, comprehensive TME segmentation across various tissue types and medical institutions, advancing the quantitative analysis of tumor microenvironments. This method represents a significant step forward in enhancing our understanding of cancer biology and supporting precise clinical decision-making from large-scale histopathology images.
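The aggregated-teacher idea can be pictured with a minimal sketch (an illustrative toy, not PAGET's actual implementation; the uniform teacher weighting and the absence of hierarchy handling are simplifying assumptions): per-pixel class probabilities from several teacher models are averaged into one soft-label map, and a student is trained against it with a distillation loss.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_teachers(teacher_logits, weights=None):
    """Average per-pixel class probabilities over teacher models.

    teacher_logits: list of (H, W, C) arrays, one per teacher.
    Returns a single (H, W, C) soft-label map."""
    probs = np.stack([softmax(t) for t in teacher_logits])  # (T, H, W, C)
    if weights is None:
        weights = np.full(len(teacher_logits), 1.0 / len(teacher_logits))
    return np.tensordot(weights, probs, axes=1)             # (H, W, C)

def distillation_loss(student_logits, soft_targets, eps=1e-9):
    """Pixel-wise cross-entropy of the student against aggregated soft labels."""
    log_p = np.log(softmax(student_logits) + eps)
    return float(-np.mean((soft_targets * log_p).sum(axis=-1)))
```

With C = 14 classes this mirrors the scale of PAGET's 14 TME components, though the real method additionally exploits the hierarchical relations between cell types when merging teacher outputs.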
Related papers
- Leveraging Pathology Foundation Models for Panoptic Segmentation of Melanoma in H&E Images [4.058897726957504]
We propose a novel deep learning network for the segmentation of five tissue classes in melanoma H&E images. Our approach leverages Virchow2, a pathology foundation model trained on 3.1 million histopathology images, as a feature extractor. The proposed model achieved first place in the tissue segmentation task of the PUMA Grand Challenge, demonstrating robust performance and generalizability.
arXiv Detail & Related papers (2025-07-18T14:38:25Z) - PAST: A multimodal single-cell foundation model for histopathology and spatial transcriptomics in cancer [26.795192024462963]
PAST is a pan-cancer single-cell foundation model trained on 20 million paired histopathology images and single-cell transcriptomes. It predicts single-cell gene expression, virtual molecular staining, and multimodal survival analysis directly from routine pathology slides. Our work establishes a new paradigm for pathology foundation models, providing a versatile tool for high-resolution spatial omics, mechanistic discovery, and precision cancer research.
arXiv Detail & Related papers (2025-07-08T21:51:25Z) - AI Assisted Cervical Cancer Screening for Cytology Samples in Developing Countries [0.18472148461613155]
Cervical cancer remains a significant health challenge, with high incidence and mortality rates.
Conventional Liquid-Based Cytology (LBC) is labor-intensive, requires expert pathologists, and is highly prone to errors.
This paper introduces an innovative approach that integrates low-cost biological microscopes with our simple and efficient AI algorithms for automated whole-slide analysis.
arXiv Detail & Related papers (2025-04-29T05:18:59Z) - MAST-Pro: Dynamic Mixture-of-Experts for Adaptive Segmentation of Pan-Tumors with Knowledge-Driven Prompts [54.915060471994686]
We propose MAST-Pro, a novel framework that integrates dynamic Mixture-of-Experts (D-MoE) and knowledge-driven prompts for pan-tumor segmentation.
Specifically, text and anatomical prompts provide domain-specific priors guiding tumor representation learning, while D-MoE dynamically selects experts to balance generic and tumor-specific feature learning.
Experiments on multi-anatomical tumor datasets demonstrate that MAST-Pro outperforms state-of-the-art approaches, achieving up to a 5.20% average improvement while reducing trainable parameters by 91.04% without compromising accuracy.
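The dynamic expert selection described above can be sketched as top-k gated routing (a generic mixture-of-experts toy, not the MAST-Pro architecture; the expert and gate shapes here are illustrative assumptions):

```python
import numpy as np

def moe_route(x, experts, gate_w, top_k=2):
    """Route a feature vector through its top-k experts.

    x: (d,) feature vector; experts: (E, d, d) expert weight matrices;
    gate_w: (d, E) gating matrix. Returns the gated mixture of the
    selected experts' outputs."""
    scores = x @ gate_w                                # (E,) raw gate logits
    chosen = np.argsort(scores)[-top_k:]               # indices of top-k experts
    g = np.exp(scores[chosen] - scores[chosen].max())
    g = g / g.sum()                                    # renormalized gate weights
    outs = np.stack([x @ experts[i] for i in chosen])  # (top_k, d)
    return g @ outs
```

Only the selected experts run per input, which is how such designs keep trainable compute low while still covering both generic and tumor-specific features.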
arXiv Detail & Related papers (2025-03-18T15:39:44Z) - Dynamically evolving segment anything model with continuous learning for medical image segmentation [50.92344083895528]
We introduce EvoSAM, a dynamically evolving medical image segmentation model.
EvoSAM continuously accumulates new knowledge from an ever-expanding array of scenarios and tasks.
Experiments conducted by surgical clinicians on blood vessel segmentation confirm that EvoSAM enhances segmentation efficiency based on user prompts.
arXiv Detail & Related papers (2025-03-08T14:37:52Z) - TSEML: A task-specific embedding-based method for few-shot classification of cancer molecular subtypes [4.815808233338459]
We focus on the few-shot molecular subtype prediction problem in heterogeneous and small cancer datasets.
We introduce a task-specific embedding-based meta-learning framework (TSEML).
Our framework achieves superior performance in addressing the problem of few-shot molecular subtype classification.
arXiv Detail & Related papers (2024-12-17T11:30:54Z) - Enhanced MRI Representation via Cross-series Masking [48.09478307927716]
We propose a Cross-Series Masking (CSM) strategy for effectively learning MRI representations in a self-supervised manner. The method achieves state-of-the-art performance on both public and in-house datasets.
arXiv Detail & Related papers (2024-12-10T10:32:09Z) - Multimodal Cross-Task Interaction for Survival Analysis in Whole Slide Pathological Images [10.996711454572331]
Survival prediction, utilizing pathological images and genomic profiles, is increasingly important in cancer analysis and prognosis.
Existing multimodal methods often rely on alignment strategies to integrate complementary information.
We propose a Multimodal Cross-Task Interaction (MCTI) framework to explore the intrinsic correlations between subtype classification and survival analysis tasks.
arXiv Detail & Related papers (2024-06-25T02:18:35Z) - Genomics-guided Representation Learning for Pathologic Pan-cancer Tumor Microenvironment Subtype Prediction [7.502459517962686]
We propose PathoTME, a genomics-guided representation learning framework employing Whole Slide Images (WSIs) for pan-cancer TME subtype prediction.
Our model achieves better performance than other state-of-the-art methods across 23 cancer types on the TCGA dataset.
arXiv Detail & Related papers (2024-06-10T17:56:21Z) - Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
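One way to picture how positional cues from a coarse mask can enrich image features (a toy gating analogue, not M-SAM's actual adapter; the attenuation factor is an assumption for illustration):

```python
import numpy as np

def mask_enhanced(features, coarse_mask, alpha=0.5):
    """Re-weight feature maps using a coarse binary segmentation mask.

    features: (H, W, C); coarse_mask: (H, W) with values in {0, 1}.
    Pixels inside the coarse lesion region keep full weight; pixels
    outside are attenuated by alpha, so the refinement stage focuses
    on the region the coarse mask localizes."""
    gate = alpha + (1.0 - alpha) * coarse_mask[..., None]  # (H, W, 1)
    return features * gate
```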
arXiv Detail & Related papers (2024-03-09T13:37:02Z) - AG-CRC: Anatomy-Guided Colorectal Cancer Segmentation in CT with Imperfect Anatomical Knowledge [9.961742312147674]
We develop a novel Anatomy-Guided segmentation framework to exploit the auto-generated organ masks.
We extensively evaluate the proposed method on two CRC segmentation datasets.
arXiv Detail & Related papers (2023-10-07T03:22:06Z) - Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have achieved promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z) - Weakly supervised multiple instance learning histopathological tumor segmentation [51.085268272912415]
We propose a weakly supervised framework for whole slide imaging segmentation.
We exploit a multiple instance learning scheme for training models.
The proposed framework has been evaluated on multi-locations and multi-centric public data from The Cancer Genome Atlas and the PatchCamelyon dataset.
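The multiple-instance idea — supervising patch-level models with only slide-level labels — can be sketched with a simple top-k pooling rule (illustrative; the paper's exact aggregation scheme may differ):

```python
import numpy as np

def slide_score(patch_scores, k=5):
    """Aggregate patch-level tumor probabilities into one slide-level score.

    Under the MIL assumption a slide is positive if any of its patches is,
    so the slide score is the mean of its k highest patch scores; the
    slide-level label then supervises only those selected patches."""
    k = min(k, len(patch_scores))
    return float(np.sort(patch_scores)[-k:].mean())
```

This is what lets whole-slide models train without pixel-level annotations: only the slide label is needed, and gradients flow through the most suspicious patches.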
arXiv Detail & Related papers (2020-04-10T13:12:47Z) - Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors of different sizes.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.