Hide-and-Seek Attribution: Weakly Supervised Segmentation of Vertebral Metastases in CT
- URL: http://arxiv.org/abs/2512.06849v1
- Date: Sun, 07 Dec 2025 14:03:28 GMT
- Title: Hide-and-Seek Attribution: Weakly Supervised Segmentation of Vertebral Metastases in CT
- Authors: Matan Atad, Alexander W. Marka, Lisa Steinhelfer, Anna Curto-Vilalta, Yannik Leonhardt, Sarah C. Foreman, Anna-Sophia Walburga Dietrich, Robert Graf, Alexandra S. Gersing, Bjoern Menze, Daniel Rueckert, Jan S. Kirschke, Hendrik Möller
- Abstract summary: We introduce a weakly supervised method trained solely on vertebra-level healthy/malignant labels, without any lesion masks. We achieve strong blastic/lytic performance despite no mask supervision.
- Score: 68.09387763135236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate segmentation of vertebral metastasis in CT is clinically important yet difficult to scale, as voxel-level annotations are scarce and both lytic and blastic lesions often resemble benign degenerative changes. We introduce a weakly supervised method trained solely on vertebra-level healthy/malignant labels, without any lesion masks. The method combines a Diffusion Autoencoder (DAE) that produces a classifier-guided healthy edit of each vertebra with pixel-wise difference maps that propose candidate lesion regions. To determine which regions truly reflect malignancy, we introduce Hide-and-Seek Attribution: each candidate is revealed in turn while all others are hidden, the edited image is projected back to the data manifold by the DAE, and a latent-space classifier quantifies the isolated malignant contribution of that component. High-scoring regions form the final lytic or blastic segmentation. On held-out radiologist annotations, we achieve strong blastic/lytic performance despite no mask supervision (F1: 0.91/0.85; Dice: 0.87/0.78), exceeding baselines (F1: 0.79/0.67; Dice: 0.74/0.55). These results show that vertebra-level labels can be transformed into reliable lesion masks, demonstrating that generative editing combined with selective occlusion supports accurate weakly supervised segmentation in CT.
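The occlusion-and-score loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `malignancy_score` stands in for the paper's latent-space classifier, `candidate_masks` are assumed to come from the DAE difference maps, and the projection of each composite back to the data manifold by the DAE is omitted for brevity.

```python
import numpy as np

def hide_and_seek_attribution(image, healthy_edit, candidate_masks,
                              malignancy_score, threshold=0.5):
    """Score each candidate region by revealing it in isolation.

    For every candidate mask, pixels inside that mask are kept from the
    original image while all other candidate pixels are replaced by the
    classifier-guided healthy edit; the composite is then scored.
    (In the paper the composite is first projected back to the data
    manifold by the DAE before scoring; that step is omitted here.)
    """
    # Union of all candidate regions proposed by the difference maps.
    union = np.zeros_like(image, dtype=bool)
    for m in candidate_masks:
        union |= m

    kept = []
    for m in candidate_masks:
        hidden = union & ~m                       # every other candidate
        composite = image.copy()
        composite[hidden] = healthy_edit[hidden]  # hide the rest
        if malignancy_score(composite) >= threshold:
            kept.append(m)                        # isolated malignant signal

    # Final segmentation: union of high-scoring candidate regions.
    segmentation = np.zeros_like(image, dtype=bool)
    for m in kept:
        segmentation |= m
    return segmentation
```

A toy run with two candidate regions, where only the bright one drives a simple intensity-based score above threshold, keeps exactly that region in the output mask.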
Related papers
- Autonomous labeling of surgical resection margins using a foundation model [4.873604837915161]
We present a virtual inking network (VIN) that autonomously localizes the surgical cut surface on whole-slide images. VIN uses a frozen foundation model as the feature extractor and a compact two-layer multilayer perceptron trained for patch-level classification of cautery-consistent features.
arXiv Detail & Related papers (2025-11-27T05:52:42Z) - Transformer Classification of Breast Lesions: The BreastDCEDL_AMBL Benchmark Dataset and 0.92 AUC Baseline [1.9336815376402718]
This study introduces a transformer-based framework for automated classification of breast lesions in dynamic contrast-enhanced MRI. We implemented a SegFormer architecture that achieved an AUC of 0.92 for lesion-level classification, with 100% sensitivity and 67% specificity at the patient level. Public release of the dataset, models, and evaluation protocols provides the first standardized benchmark for DCE-MRI lesion classification.
arXiv Detail & Related papers (2025-09-30T15:58:02Z) - SD-RetinaNet: Topologically Constrained Semi-Supervised Retinal Lesion and Layer Segmentation in OCT [5.409364353574134]
We propose a novel semi-supervised model that introduces a fully differentiable biomarker topology engine. Our model learns a disentangled representation, separating spatial and style factors. We evaluate the proposed model on public and internal datasets of OCT scans and show that it outperforms the current state-of-the-art in both lesion and layer segmentation.
arXiv Detail & Related papers (2025-09-25T07:56:38Z) - Uncertainty-Guided Coarse-to-Fine Tumor Segmentation with Anatomy-Aware Post-Processing [12.163563962576587]
Reliable tumor segmentation in thoracic computed tomography (CT) remains challenging due to boundary ambiguity, class imbalance, and anatomical variability. We propose an uncertainty-guided, coarse-to-fine segmentation framework that combines full-volume tumor localization with refined region-of-interest (ROI) segmentation. Experiments on private and public datasets demonstrate improvements in Dice and Hausdorff scores, with fewer false positives and enhanced spatial interpretability.
arXiv Detail & Related papers (2025-04-16T16:08:38Z) - Shape Matters: Detecting Vertebral Fractures Using Differentiable Point-Based Shape Decoding [51.38395069380457]
Degenerative spinal pathologies are highly prevalent among the elderly population.
Timely diagnosis of osteoporotic fractures and other degenerative deformities facilitates proactive measures to mitigate the risk of severe back pain and disability.
In this study, we specifically explore the use of shape auto-encoders for vertebrae.
arXiv Detail & Related papers (2023-12-08T18:11:22Z) - Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z) - Edge-competing Pathological Liver Vessel Segmentation with Limited Labels [61.38846803229023]
No algorithm has yet been tailored to detect MVI from pathological images.
This paper collects the first pathological liver image dataset containing 522 whole slide images with labels of vessels, MVI, and carcinoma grades.
We propose an Edge-competing Vessel Network (EVS-Net) which contains a segmentation network and two edge segmentation discriminators.
arXiv Detail & Related papers (2021-08-01T07:28:32Z) - Dual-Consistency Semi-Supervised Learning with Uncertainty Quantification for COVID-19 Lesion Segmentation from CT Images [49.1861463923357]
We propose an uncertainty-guided dual-consistency learning network (UDC-Net) for semi-supervised COVID-19 lesion segmentation from CT images.
Our proposed UDC-Net improves the fully supervised method by 6.3% in Dice and outperforms other competitive semi-supervised approaches by significant margins.
arXiv Detail & Related papers (2021-04-07T16:23:35Z) - Weakly-Supervised Cross-Domain Adaptation for Endoscopic Lesions Segmentation [79.58311369297635]
We propose a new weakly-supervised lesions transfer framework, which can explore transferable domain-invariant knowledge across different datasets.
A Wasserstein quantified transferability framework is developed to highlight wide-range transferable contextual dependencies.
A novel self-supervised pseudo label generator is designed to equally provide confident pseudo pixel labels for both hard-to-transfer and easy-to-transfer target samples.
arXiv Detail & Related papers (2020-12-08T02:26:03Z) - Leveraging SLIC Superpixel Segmentation and Cascaded Ensemble SVM for Fully Automated Mass Detection In Mammograms [1.7205106391379026]
This paper proposes a rigorous segmentation method, supported by morphological enhancement using grayscale linear filters.
A novel cascaded ensemble of support vector machines (SVM) is used to effectively tackle the class imbalance and provide significant predictions.
arXiv Detail & Related papers (2020-10-20T15:02:25Z) - Weakly-Supervised Lesion Segmentation on CT Scans using Co-Segmentation [18.58056402884405]
Lesion segmentation on computed tomography (CT) scans is an important step for precisely monitoring changes in lesion/tumor growth.
Current practices rely on an imprecise substitute, the Response Evaluation Criteria in Solid Tumors (RECIST).
This paper proposes a convolutional neural network based weakly-supervised lesion segmentation method.
arXiv Detail & Related papers (2020-01-23T15:15:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.