3D Universal Lesion Detection and Tagging in CT with Self-Training
- URL: http://arxiv.org/abs/2504.05201v1
- Date: Mon, 07 Apr 2025 15:50:27 GMT
- Title: 3D Universal Lesion Detection and Tagging in CT with Self-Training
- Authors: Jared Frazier, Tejas Sudharshan Mathai, Jianfei Liu, Angshuman Paul, Ronald M. Summers
- Abstract summary: We propose a self-training pipeline to detect 3D lesions and tag them according to the body part they occur in. To our knowledge, we are the first to jointly detect lesions in 3D and tag them according to the body part label.
- Score: 3.68620908362189
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Radiologists routinely perform the tedious task of lesion localization, classification, and size measurement in computed tomography (CT) studies. Universal lesion detection and tagging (ULDT) can simultaneously help alleviate the cumbersome nature of lesion measurement and enable tumor burden assessment. Previous ULDT approaches utilize the publicly available DeepLesion dataset; however, it does not provide the full volumetric (3D) extent of lesions and also displays a severe class imbalance. In this work, we propose a self-training pipeline to detect 3D lesions and tag them according to the body part they occur in. We used a significantly limited 30% subset of DeepLesion to train a VFNet model for 2D lesion detection and tagging. Next, the 2D lesion context was expanded into 3D, and the mined 3D lesion proposals were integrated back into the baseline training data in order to retrain the model over multiple rounds. Through the self-training procedure, our VFNet model learned from its own predictions, detected lesions in 3D, and tagged them. Our results indicated that our VFNet model achieved an average sensitivity of 46.9% at [0.125:8] false positives (FP) with a limited 30% data subset, in comparison to the 46.8% of an existing approach that used the entire DeepLesion dataset. To our knowledge, we are the first to jointly detect lesions in 3D and tag them according to the body part label.
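The self-training loop described in the abstract (detect in 2D, expand detections into 3D proposals, fold confident proposals back into the training data, retrain over multiple rounds) can be sketched roughly as follows. This is an illustrative sketch only: all names (`Lesion3D`, `expand_to_3d`, `self_train`, the confidence threshold) are hypothetical stand-ins, not the authors' code, and the actual VFNet retraining step is elided.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Lesion3D:
    slices: range     # axial (z) extent of the lesion proposal
    body_part: str    # body part tag, e.g. "liver", "lung"
    score: float      # detector confidence

def expand_to_3d(det_slice: int, score: float, body_part: str,
                 growth: int = 2) -> Lesion3D:
    """Toy stand-in for expanding a 2D detection into a 3D proposal
    by extending it across neighboring axial slices."""
    return Lesion3D(range(det_slice - growth, det_slice + growth + 1),
                    body_part, score)

def self_train(train_set: List[Lesion3D],
               mined_preds: List[Tuple[int, float, str]],
               rounds: int = 3,
               conf_thresh: float = 0.5) -> List[Lesion3D]:
    """Each round: expand 2D detections into 3D proposals, keep the
    confident ones, and merge them into the training data before
    retraining. The retraining call itself is omitted here."""
    data = list(train_set)
    for _ in range(rounds):
        proposals = [expand_to_3d(s, p, b) for (s, p, b) in mined_preds]
        data += [les for les in proposals if les.score >= conf_thresh]
        # retrain_model(data)  # actual VFNet retraining would go here,
        #                      # producing new mined_preds for next round
    return data
```

In the paper's pipeline each round's retrained model produces new, presumably better proposals; the fixed `mined_preds` list here is a simplification to keep the sketch self-contained.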
Related papers
- Weakly-Supervised Detection of Bone Lesions in CT [48.34559062736031]
The skeletal region is one of the common sites of metastatic spread of breast and prostate cancer.
We developed a pipeline to detect bone lesions in CT volumes via a proxy segmentation task.
Our method detected bone lesions in CT with a precision of 96.7% and recall of 47.3% despite the use of incomplete and partial training data.
arXiv Detail & Related papers (2024-01-31T21:05:34Z)
- On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z)
- 3D unsupervised anomaly detection and localization through virtual multi-view projection and reconstruction: Clinical validation on low-dose chest computed tomography [2.2302915692528367]
We propose a method based on a deep neural network for computer-aided diagnosis called virtual multi-view projection and reconstruction.
The proposed method improves the patient-level anomaly detection by 10% compared with a gold standard based on supervised learning.
It localizes the anomaly region with 93% accuracy, demonstrating its high performance.
arXiv Detail & Related papers (2022-06-18T13:22:00Z)
- Unsupervised Anomaly Detection in 3D Brain MRI using Deep Learning with impured training data [53.122045119395594]
We study how unhealthy samples within the training data affect anomaly detection performance for brain MRI-scans.
We evaluate a method that identifies falsely labeled samples directly during training based on the reconstruction error of the autoencoder (AE).
arXiv Detail & Related papers (2022-04-12T13:05:18Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for 3D deep learning models for chest CT scan classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
- Deep Lesion Tracker: Monitoring Lesions in 4D Longitudinal Imaging Studies [19.890200389017213]
Deep lesion tracker (DLT) is a deep learning approach that uses both appearance- and anatomical-based signals.
We release the first lesion tracking benchmark, consisting of 3891 lesion pairs from the public DeepLesion database.
DLT generalizes well on an external clinical test set of 100 longitudinal studies, achieving 88% accuracy.
arXiv Detail & Related papers (2020-12-09T05:23:46Z)
- Deep Volumetric Universal Lesion Detection using Light-Weight Pseudo 3D Convolution and Surface Point Regression [23.916776570010285]
Computer-aided lesion/significant-findings detection techniques are at the core of medical imaging.
We propose a novel deep anchor-free one-stage VULD framework that incorporates (1) P3DC operators to recycle the architectural configurations and pre-trained weights of off-the-shelf 2D networks, and (2) a new SPR method to effectively regress the 3D spatial extents of lesions by pinpointing representative key points on lesion surfaces.
arXiv Detail & Related papers (2020-08-30T19:42:06Z)
- Volumetric Attention for 3D Medical Image Segmentation and Detection [53.041572035020344]
A volumetric attention (VA) module for 3D medical image segmentation and detection is proposed.
VA attention, inspired by recent advances in video processing, enables 2.5D networks to leverage context information along the z direction.
Its integration into Mask R-CNN is shown to enable state-of-the-art performance on the Liver Tumor Segmentation (LiTS) Challenge.
arXiv Detail & Related papers (2020-04-04T18:55:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.