Promptable segmentation with region exploration enables minimal-effort expert-level prostate cancer delineation
- URL: http://arxiv.org/abs/2602.17813v1
- Date: Thu, 19 Feb 2026 20:29:41 GMT
- Title: Promptable segmentation with region exploration enables minimal-effort expert-level prostate cancer delineation
- Authors: Junqing Yang, Natasha Thorley, Ahmed Nadeem Abbasi, Shonit Punwani, Zion Tse, Yipeng Hu, Shaheer U. Saeed
- Abstract summary: This work aims to bridge the gap between automated and manual segmentation through a framework driven by user-provided point prompts. The framework was evaluated on two public prostate MR datasets (PROMIS and PICAI, with 566 and 1090 cases).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: Accurate segmentation of prostate cancer on magnetic resonance (MR) images is crucial for planning image-guided interventions such as targeted biopsies, cryoablation, and radiotherapy. However, subtle and variable tumour appearances, differences in imaging protocols, and limited expert availability make consistent interpretation difficult. While automated methods aim to address this, they rely on large expertly-annotated datasets that are often inconsistent, whereas manual delineation remains labour-intensive. This work aims to bridge the gap between automated and manual segmentation through a framework driven by user-provided point prompts, enabling accurate segmentation with minimal annotation effort. Methods: The framework combines reinforcement learning (RL) with a region-growing segmentation process guided by user prompts. Starting from an initial point prompt, region-growing generates a preliminary segmentation, which is iteratively refined through RL. At each step, the RL agent observes the image and current segmentation to predict a new point, from which region growing updates the mask. A reward, balancing segmentation accuracy and voxel-wise uncertainty, encourages exploration of ambiguous regions, allowing the agent to escape local optima and perform sample-specific optimisation. Despite requiring fully supervised training, the framework bridges manual and fully automated segmentation at inference by substantially reducing user effort while outperforming current fully automated methods. Results: The framework was evaluated on two public prostate MR datasets (PROMIS and PICAI, with 566 and 1090 cases). It outperformed the previous best automated methods by 9.9% and 8.9%, respectively, with performance comparable to manual radiologist segmentation, reducing annotation time tenfold.
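The prompt-grow-refine loop described in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: the 2D image, the intensity-tolerance region growing, the Dice-based reward, and the stand-in "policy" (which simply prompts the brightest unsegmented pixel, in place of a trained RL agent) are all illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seeds, tol=0.2):
    """Flood-fill from point prompts: include 4-connected pixels whose
    intensity is within `tol` of their already-accepted neighbour."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque()
    for s in seeds:
        if not mask[s]:
            mask[s] = True
            q.append(s)
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - float(image[y, x])) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

def dice(a, b):
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def refine(image, label, seed, steps=5):
    """Iteratively add point prompts, starting from one user-given seed.
    In the paper an RL agent picks each new point from the image and the
    current mask; here a heuristic stand-in prompts the brightest pixel
    not yet segmented. The Dice-gain reward is computed only to show
    where the training signal would come from."""
    seeds = [seed]
    mask = region_grow(image, seeds)
    rewards = []
    for _ in range(steps):
        outside = np.where(~mask)
        if outside[0].size == 0:
            break
        i = int(np.argmax(image[outside]))          # stand-in policy
        nxt = (int(outside[0][i]), int(outside[1][i]))
        new_mask = region_grow(image, seeds + [nxt])
        reward = dice(new_mask, label) - dice(mask, label)
        rewards.append(reward)
        if reward <= 0:
            break  # the prompt no longer helps; keep the previous mask
        seeds.append(nxt)
        mask = new_mask
    return mask, rewards
```

For example, on a synthetic image with two bright blobs and a single seed inside one of them, the first region growing captures only the seeded blob; the next prompt lands in the second blob and the mask reaches full overlap with the label. The paper's actual reward additionally weights voxel-wise uncertainty to push the agent toward ambiguous regions, which this sketch omits.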
Related papers
- Resource-efficient Automatic Refinement of Segmentations via Weak Supervision from Light Feedback [1.8082075562656847]
We present SCORE, a weakly supervised framework that learns to refine mask predictions using only light feedback during training. We demonstrate SCORE on humerus CT scans, where it considerably improves initial predictions and achieves performance on par with existing refinement methods.
arXiv Detail & Related papers (2025-11-04T13:53:10Z)
- CLAPS: A CLIP-Unified Auto-Prompt Segmentation for Multi-Modal Retinal Imaging [47.04292769940597]
We propose CLIP-unified Auto-Prompt (CLAPS), a novel method for unified segmentation across diverse tasks and modalities in retinal imaging. Our approach begins by pre-training a CLIP-based image encoder on a large, multi-modal retinal dataset. To unify tasks and resolve ambiguity, we use text prompts enhanced with a unique "modality signature" for each imaging modality.
arXiv Detail & Related papers (2025-09-10T14:14:49Z)
- MyGO: Make your Goals Obvious, Avoiding Semantic Confusion in Prostate Cancer Lesion Region Segmentation [14.346163388200148]
We propose a novel Pixel Anchor Module, which guides the model to discover a sparse set of feature anchors. This mechanism enhances the model's nonlinear representation capacity and improves segmentation accuracy within lesion regions. Our method achieves state-of-the-art performance on the PI-CAI dataset, demonstrating 69.73% IoU and 74.32% Dice scores.
arXiv Detail & Related papers (2025-07-23T07:10:07Z)
- Promptable cancer segmentation using minimal expert-curated data [5.097733221827974]
Automated segmentation of cancer on medical images can aid targeted diagnostic and therapeutic procedures. Its adoption is limited by the high cost of expert annotations required for training and inter-observer variability in datasets. We propose a novel approach for promptable segmentation requiring only 24 fully-segmented images, supplemented by 8 weakly-labelled images.
arXiv Detail & Related papers (2025-05-23T13:56:40Z)
- Flip Learning: Weakly Supervised Erase to Segment Nodules in Breast Ultrasound [40.97115667616978]
We introduce a novel learning-based WSS framework called Flip Learning, which relies solely on 2D/3D boxes for accurate segmentation. Multiple agents are employed to erase the target from the box to facilitate classification tag flipping, with the erased region serving as the predicted segmentation mask. Our method outperforms state-of-the-art WSS methods and foundation models, and achieves comparable performance as fully-supervised learning algorithms.
arXiv Detail & Related papers (2025-03-26T16:20:02Z)
- Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR).
arXiv Detail & Related papers (2024-08-19T15:04:42Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modelling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Generative Adversarial Networks for Weakly Supervised Generation and Evaluation of Brain Tumor Segmentations on MR Images [0.0]
This work presents a weakly supervised approach to segment anomalies in 2D magnetic resonance images.
We train a generative adversarial network (GAN) that converts cancerous images to healthy variants.
Non-cancerous variants can also be used to evaluate the segmentations in a weakly supervised fashion.
arXiv Detail & Related papers (2022-11-10T00:04:46Z)
- PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z)
- A unified 3D framework for Organs at Risk Localization and Segmentation for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OARs).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z)
- Weakly-Supervised Universal Lesion Segmentation with Regional Level Set Loss [16.80758525711538]
We present a novel weakly-supervised universal lesion segmentation method based on the High-Resolution Network (HRNet).
AHRNet provides advanced high-resolution deep image features by involving a decoder, dual-attention and scale attention mechanisms.
Our method achieves the best performance on the publicly large-scale DeepLesion dataset and a hold-out test set.
arXiv Detail & Related papers (2021-05-03T23:33:37Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation [57.78765460295249]
We develop a novel automatic learning-based data augmentation method for medical image segmentation.
In our method, we innovatively combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss.
We extensively evaluated our method on CT kidney tumor segmentation, which validated its promising results.
arXiv Detail & Related papers (2020-02-22T14:10:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.