SPARS: Self-Play Adversarial Reinforcement Learning for Segmentation of Liver Tumours
- URL: http://arxiv.org/abs/2505.18989v1
- Date: Sun, 25 May 2025 06:14:41 GMT
- Title: SPARS: Self-Play Adversarial Reinforcement Learning for Segmentation of Liver Tumours
- Authors: Catalina Tan, Yipeng Hu, Shaheer U. Saeed
- Abstract summary: Fully-supervised machine learning models aim to automate localisation tasks, but they require a large number of costly and often subjective 3D voxel-level labels for training. We propose a novel weakly-supervised machine learning framework called SPARS, which uses image-level binary cancer presence labels to localise cancerous regions on CT scans.
- Score: 2.5229190642019286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate tumour segmentation is vital for various targeted diagnostic and therapeutic procedures for cancer, e.g., planning biopsies or tumour ablations. Manual delineation is extremely labour-intensive, requiring substantial expert time. Fully-supervised machine learning models aim to automate such localisation tasks, but require a large number of costly and often subjective 3D voxel-level labels for training. The high variance and subjectivity of such labels impact model generalisability, even when large datasets are available. Histopathology may offer more objective labels, but acquiring the pixel-level annotations needed to develop histology-based tumour localisation methods remains infeasible in-vivo. In this work, we propose a novel weakly-supervised semantic segmentation framework called SPARS (Self-Play Adversarial Reinforcement Learning for Segmentation), which utilises an object presence classifier, trained on a small number of image-level binary cancer presence labels, to localise cancerous regions on CT scans. Such binary labels of patient-level cancer presence can be sourced more feasibly from biopsies and histopathology reports, enabling more objective cancer localisation on medical images. Evaluating with real patient data, we observed that SPARS yielded a mean Dice score of $77.3 \pm 9.4$, which outperformed other weakly-supervised methods by large margins. This performance was comparable with recent fully-supervised methods that require voxel-level annotations. Our results demonstrate the potential of using SPARS to reduce the need for extensive human-annotated labels to detect cancer in real-world healthcare settings.
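The abstract describes the approach only at a high level: the learning signal comes from an object-presence classifier trained on image-level binary labels, and segmentation quality is reported as a Dice score. The sketch below (Python/NumPy) illustrates those two ingredients under stated assumptions; the `presence_reward` formulation, the dummy classifier, and all names are hypothetical and are not taken from the paper, whose self-play adversarial reinforcement learning scheme is not reproduced here.

```python
# Minimal sketch (not the authors' implementation): an object-presence classifier,
# trained only on image-level "cancer present / absent" labels, scores a candidate
# tumour mask, and segmentation quality is measured with the Dice overlap reported
# in the abstract. `presence_reward` and the dummy classifier are hypothetical.
import numpy as np


def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks (the metric reported as 77.3 +/- 9.4)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))


def presence_reward(image: np.ndarray, mask: np.ndarray, classifier) -> float:
    """Hypothetical weak-supervision signal for a candidate mask.

    `classifier(volume) -> p(cancer present)` is assumed to be trained on
    image-level binary labels only, so no voxel-level annotation is required.
    The reward is high when the masked region looks cancerous to the classifier
    and the remaining tissue does not.
    """
    keep = mask.astype(bool)
    inside = np.where(keep, image, 0.0)   # keep only the proposed region
    outside = np.where(keep, 0.0, image)  # suppress the proposed region
    return float(classifier(inside) - classifier(outside))


if __name__ == "__main__":
    # Toy usage with a dummy classifier that scores mean intensity as "cancer-ness".
    rng = np.random.default_rng(0)
    ct = rng.random((32, 32, 32))
    gt_mask = np.zeros(ct.shape, dtype=bool)
    gt_mask[10:20, 10:20, 10:20] = True
    pred_mask = np.zeros(ct.shape, dtype=bool)
    pred_mask[12:22, 10:20, 10:20] = True

    dummy_classifier = lambda volume: float(volume.mean())
    print("Dice:", dice_score(pred_mask, gt_mask))
    print("Presence reward:", presence_reward(ct, pred_mask, dummy_classifier))
```

The toy `__main__` block only demonstrates the interfaces; in the paper's setting the classifier would presumably be a trained 3D network and the candidate masks would come from the segmentation agent being trained.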
Related papers
- Iterative pseudo-labeling based adaptive copy-paste supervision for semi-supervised tumor segmentation [25.905770074627174]
The paper proposes iterative pseudo-labeling based adaptive copy-paste supervision (IPA-CP) for tumor segmentation in CT scans. IPA-CP incorporates a two-way uncertainty-based adaptive augmentation mechanism, aiming to inject tumor uncertainties into adaptive augmentation. Experiments on both in-house and public datasets show that our framework outperforms state-of-the-art SSL methods in medical image segmentation.
arXiv Detail & Related papers (2025-08-06T03:12:30Z)
- Promptable cancer segmentation using minimal expert-curated data [5.097733221827974]
Automated segmentation of cancer on medical images can aid targeted diagnostic and therapeutic procedures. Its adoption is limited by the high cost of expert annotations required for training and inter-observer variability in datasets. We propose a novel approach for promptable segmentation requiring only 24 fully-segmented images, supplemented by 8 weakly-labelled images.
arXiv Detail & Related papers (2025-05-23T13:56:40Z)
- Boosting Medical Image-based Cancer Detection via Text-guided Supervision from Reports [68.39938936308023]
We propose a novel text-guided learning method to achieve highly accurate cancer detection results.
Our approach can leverage clinical knowledge from a large-scale pre-trained VLM to enhance generalization ability.
arXiv Detail & Related papers (2024-05-23T07:03:38Z)
- Weakly-supervised positional contrastive learning: application to cirrhosis classification [45.63061034568991]
Large medical imaging datasets can be cheaply annotated with low-confidence, weak labels.
Access to high-confidence labels, such as histology-based diagnoses, is rare and costly.
We propose an efficient weakly-supervised positional (WSP) contrastive learning strategy.
arXiv Detail & Related papers (2023-07-10T15:02:13Z)
- Analysing the effectiveness of a generative model for semi-supervised medical image segmentation [23.898954721893855]
The state of the art in automated segmentation remains supervised learning, employing discriminative models such as U-Net.
Semi-supervised learning (SSL) attempts to leverage the abundance of unlabelled data to obtain more robust and reliable models.
Deep generative models such as the SemanticGAN are truly viable alternatives to tackle challenging medical image segmentation problems.
arXiv Detail & Related papers (2022-11-03T15:19:59Z)
- WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic Segmentation for Lung Adenocarcinoma [51.50991881342181]
This challenge includes 10,091 patch-level annotations and over 130 million labeled pixels.
The first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919).
arXiv Detail & Related papers (2022-04-13T15:27:05Z)
- Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images [83.7047542725469]
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithms development.
We present a method, named Label Cleaning Multiple Instance Learning (LC-MIL), to refine coarse annotations on a single WSI without the need for external training data.
Our experiments on a heterogeneous WSI set with breast cancer lymph node metastasis, liver cancer, and colorectal cancer samples show that LC-MIL significantly refines the coarse annotations, outperforming the state-of-the-art alternatives, even while learning from a single slide.
arXiv Detail & Related papers (2021-09-22T15:06:06Z)
- Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis [17.936019428281586]
In cancer diagnosis, interpretability can be achieved by localizing the region of the input image responsible for the output.
We introduce a novel neural network architecture to perform weakly-supervised segmentation of high-resolution images.
We apply this model to breast cancer diagnosis with screening mammography, and validate it on a large clinically-realistic dataset.
arXiv Detail & Related papers (2021-06-13T17:25:21Z)
- Deep Semi-supervised Metric Learning with Dual Alignment for Cervical Cancer Cell Detection [49.78612417406883]
We propose a novel semi-supervised deep metric learning method for cervical cancer cell detection.
Our model learns an embedding metric space and conducts dual alignment of semantic features on both the proposal and prototype levels.
We construct a large-scale dataset for semi-supervised cervical cancer cell detection for the first time, consisting of 240,860 cervical cell images.
arXiv Detail & Related papers (2021-04-07T17:11:27Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.