Co-Seg++: Mutual Prompt-Guided Collaborative Learning for Versatile Medical Segmentation
- URL: http://arxiv.org/abs/2506.17159v1
- Date: Fri, 20 Jun 2025 17:05:09 GMT
- Title: Co-Seg++: Mutual Prompt-Guided Collaborative Learning for Versatile Medical Segmentation
- Authors: Qing Xu, Yuxiang Luo, Wenting Duan, Zhen Chen
- Abstract summary: Medical image analysis is critical yet challenged by the need to jointly segment organs or tissues. Existing studies typically formulate different segmentation tasks in isolation. We propose a Co-Seg++ framework for versatile medical segmentation.
- Score: 4.584473085778697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical image analysis is critical yet challenged by the need to jointly segment organs or tissues, as well as numerous instances, for anatomical structure and tumor microenvironment analysis. Existing studies typically formulate different segmentation tasks in isolation, which overlooks the fundamental interdependencies between these tasks, leading to suboptimal segmentation performance and insufficient medical image understanding. To address this issue, we propose the Co-Seg++ framework for versatile medical segmentation. Specifically, we introduce a novel co-segmentation paradigm that allows semantic and instance segmentation tasks to mutually enhance each other. We first devise a spatio-temporal prompt encoder (STP-Encoder) to capture long-range spatial and temporal relationships between segmentation regions and image embeddings as prior spatial constraints. Moreover, we devise a multi-task collaborative decoder (MTC-Decoder) that leverages cross-guidance to strengthen the contextual consistency of both tasks, jointly computing semantic and instance segmentation masks. Extensive experiments on diverse CT and histopathology datasets demonstrate that the proposed Co-Seg++ outperforms state-of-the-art methods in semantic, instance, and panoptic segmentation of dental anatomical structures, histopathology tissues, and nuclei instances. The source code is available at https://github.com/xq141839/Co-Seg-Plus.
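The mutual cross-guidance idea described above, where each task's prediction serves as a spatial prior for the other, can be sketched as a toy iterative loop. This is a minimal NumPy illustration under assumed shapes and random weights, not the authors' MTC-Decoder implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cross_guided_decode(feat, w_sem, w_inst, rounds=2):
    """Toy cross-guidance loop: each head's prediction gates the
    other head's features before the next decoding round."""
    sem_prior = np.ones(feat.shape[:2])    # start from uniform priors
    inst_prior = np.ones(feat.shape[:2])
    for _ in range(rounds):
        # each branch sees features modulated by the other task's prior
        sem_logits = (feat * inst_prior[..., None]) @ w_sem
        inst_logits = (feat * sem_prior[..., None]) @ w_inst
        sem_prior = sigmoid(sem_logits[..., 0])
        inst_prior = sigmoid(inst_logits[..., 0])
    # threshold the final priors into binary masks
    return sem_prior > 0.5, inst_prior > 0.5

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 4))      # H x W x C image embedding
w_sem = rng.standard_normal((4, 1))        # semantic head weights
w_inst = rng.standard_normal((4, 1))       # instance head weights
sem_mask, inst_mask = cross_guided_decode(feat, w_sem, w_inst)
print(sem_mask.shape, inst_mask.shape)     # (8, 8) (8, 8)
```

The point of the sketch is only the coupling structure: neither branch decodes in isolation, so each round lets one task's confidence reshape the other's evidence.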
Related papers
- Organ-aware Multi-scale Medical Image Segmentation Using Text Prompt Engineering [17.273290949721975]
Existing medical image segmentation methods rely on uni-modal visual inputs, such as images or videos, requiring labor-intensive manual annotations. Medical imaging techniques capture multiple intertwined organs within a single scan, further complicating segmentation accuracy. To address these challenges, MedSAM was developed to enhance segmentation accuracy by integrating image features with user-provided prompts.
arXiv Detail & Related papers (2025-03-18T01:35:34Z) - Image Segmentation in Foundation Model Era: A Survey [95.60054312319939]
Current research in image segmentation lacks a detailed analysis of distinct characteristics, challenges, and solutions. This survey seeks to fill this gap by providing a thorough review of cutting-edge research centered around FM-driven image segmentation. An exhaustive overview of over 300 segmentation approaches is provided to encapsulate the breadth of current research efforts.
arXiv Detail & Related papers (2024-08-23T10:07:59Z) - CAT: Coordinating Anatomical-Textual Prompts for Multi-Organ and Tumor Segmentation [11.087654014615955]
We introduce CAT, an innovative model that Coordinates Anatomical prompts derived from 3D cropped images with Textual prompts enriched by medical domain knowledge.
Trained on a consortium of 10 public CT datasets, CAT demonstrates superior performance in multiple segmentation tasks.
This approach confirms that coordinating multimodal prompts is a promising avenue for addressing complex scenarios in the medical domain.
arXiv Detail & Related papers (2024-06-11T09:22:39Z) - Teaching AI the Anatomy Behind the Scan: Addressing Anatomical Flaws in Medical Image Segmentation with Learnable Prior [34.54360931760496]
Key anatomical features, such as the number of organs, their shapes and relative positions, are crucial for building a robust multi-organ segmentation model.
We introduce a novel architecture called the Anatomy-Informed Network (AIC-Net).
AIC-Net incorporates a learnable input termed "Anatomical Prior", which can be adapted to patient-specific anatomy.
arXiv Detail & Related papers (2024-03-27T10:46:24Z) - Segment Everything Everywhere All at Once [124.90835636901096]
We present SEEM, a promptable and interactive model for segmenting everything everywhere all at once in an image.
We propose a novel decoding mechanism that enables diverse prompting for all types of segmentation tasks.
We conduct a comprehensive empirical study to validate the effectiveness of SEEM across diverse segmentation tasks.
arXiv Detail & Related papers (2023-04-13T17:59:40Z) - Implicit Anatomical Rendering for Medical Image Segmentation with Stochastic Experts [11.007092387379078]
We propose MORSE, a generic implicit neural rendering framework designed at an anatomical level to assist learning in medical image segmentation.
Our approach is to formulate medical image segmentation as a rendering problem in an end-to-end manner.
Our experiments demonstrate that MORSE can work well with different medical segmentation backbones.
arXiv Detail & Related papers (2023-04-06T16:44:03Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
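The dual-task design mentioned above, a shared encoder feeding two independent decoders (segmentation and inpainting), can be sketched in a few lines. This is a hypothetical NumPy toy with random weights, not the paper's network; the class and attribute names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

class DualTaskNet:
    """Minimal sketch: one shared encoder feeding two independent
    decoder heads, one for mask logits and one for reconstruction."""
    def __init__(self, in_ch=3, hid=8):
        self.enc = rng.standard_normal((in_ch, hid))       # shared encoder
        self.dec_seg = rng.standard_normal((hid, 1))       # segmentation head
        self.dec_inp = rng.standard_normal((hid, in_ch))   # inpainting head

    def forward(self, img):
        z = relu(img @ self.enc)                  # shared representation
        seg = 1.0 / (1.0 + np.exp(-(z @ self.dec_seg)))    # mask in [0, 1]
        recon = z @ self.dec_inp                  # reconstructed pixels
        return seg, recon

net = DualTaskNet()
img = rng.standard_normal((16, 16, 3))            # H x W x C input
seg, recon = net.forward(img)
print(seg.shape, recon.shape)                     # (16, 16, 1) (16, 16, 3)
```

The shared encoder is what couples the two tasks: gradients from the inpainting head shape the same representation the segmentation head reads.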
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation [55.1248480381153]
We propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes.
We show that OrganNet can effectively resist the wide variations in organ morphology and produce state-of-the-art results in one-shot segmentation task.
arXiv Detail & Related papers (2021-03-30T13:41:12Z) - Spatially Dependent U-Nets: Highly Accurate Architectures for Medical Imaging Segmentation [10.77039660100327]
We introduce a novel deep neural network architecture that exploits the inherent spatial coherence of anatomical structures.
Our approach is well equipped to capture long-range spatial dependencies in the segmented pixel/voxel space.
Our method performs favourably compared to the commonly used U-Net and U-Net++ architectures.
arXiv Detail & Related papers (2021-03-22T10:37:20Z) - Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.