Multimodal Interactive Lung Lesion Segmentation: A Framework for
Annotating PET/CT Images based on Physiological and Anatomical Cues
- URL: http://arxiv.org/abs/2301.09914v1
- Date: Tue, 24 Jan 2023 10:50:45 GMT
- Title: Multimodal Interactive Lung Lesion Segmentation: A Framework for
Annotating PET/CT Images based on Physiological and Anatomical Cues
- Authors: Verena Jasmin Hallitschke, Tobias Schlumberger, Philipp Kataliakos,
Zdravko Marinov, Moon Kim, Lars Heiliger, Constantin Seibold, Jens Kleesiek,
Rainer Stiefelhagen
- Abstract summary: Deep learning has enabled the accurate segmentation of various diseases in medical imaging.
Such performance, however, typically demands large amounts of manual voxel annotations.
We propose a multimodal interactive segmentation framework that mitigates these issues by combining anatomical and physiological cues from PET/CT data.
- Score: 16.159693927845975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, deep learning has enabled the accurate segmentation of various
diseases in medical imaging. This performance, however, typically demands large
amounts of manual voxel annotations. This tedious process becomes even more
complex for volumetric data when the required information is not available in a
single imaging domain, as is the case for PET/CT data. We propose a multimodal
interactive segmentation framework that mitigates these issues by combining
anatomical and physiological cues from PET/CT data. Our framework utilizes the
geodesic distance transform to represent user annotations, and we implement a
novel ellipsoid-based user simulation scheme during training (sketched below). We further propose two
annotation interfaces and conduct a user study to estimate their usability. We
evaluated our model on the in-domain validation dataset and an unseen PET/CT
dataset. We make our code publicly available:
https://github.com/verena-hallitschke/pet-ct-annotate.
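As a rough illustration of the two ideas above, the sketch below simulates user clicks inside an ellipsoid fitted to a lesion mask and encodes them as a distance map. It is a minimal sketch, not the authors' code: the function names are invented, and a Euclidean distance transform stands in for the geodesic one, which would additionally follow image intensities.

```python
# Hypothetical sketch: ellipsoid-based click simulation and a distance-map
# encoding of user annotations, assuming a binary 3D lesion mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def simulate_clicks(mask, n_clicks=3, scale=1.0, rng=None):
    """Sample simulated user clicks inside an ellipsoid fitted to the lesion."""
    rng = np.random.default_rng() if rng is None else rng
    coords = np.argwhere(mask > 0)             # (N, 3) lesion voxel coordinates
    center = coords.mean(axis=0)
    radii = scale * coords.std(axis=0) + 1e-6  # semi-axes from per-axis spread
    clicks = []
    while len(clicks) < n_clicks:
        p = rng.normal(center, radii)          # candidate near the centroid
        if np.sum(((p - center) / radii) ** 2) <= 1.0:  # inside the ellipsoid?
            clicks.append(np.clip(np.round(p), 0, np.array(mask.shape) - 1))
    return np.array(clicks, dtype=int)

def encode_clicks(clicks, shape):
    """Distance map from the clicks. The paper uses a geodesic transform;
    a Euclidean transform is substituted here to keep the sketch simple."""
    seeds = np.zeros(shape, dtype=bool)
    seeds[tuple(clicks.T)] = True
    return distance_transform_edt(~seeds)      # distance to the nearest click
```

Libraries such as GeodisTK provide true 3D geodesic distance transforms that could replace the Euclidean stand-in.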
Related papers
- autoPET III Challenge: Incorporating anatomical knowledge into nnUNet for lesion segmentation in PET/CT [4.376648893167674]
The autoPET III Challenge focuses on advancing automated segmentation of tumor lesions in PET/CT images.
We developed a classifier that identifies the tracer of the given PET/CT based on the Maximum Intensity Projection of the PET scan.
Our final submission achieves cross-validation Dice scores of 76.90% and 61.33% for the publicly available FDG and PSMA datasets.
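A minimal sketch of the MIP-based tracer classification idea, assuming a (D, H, W) PET volume; the tiny CNN and the two-class head are placeholders, since the summary does not specify the challenge entry's architecture.

```python
# Hypothetical sketch: tracer classification (FDG vs. PSMA) from the
# Maximum Intensity Projection (MIP) of a PET volume.
import numpy as np
import torch
import torch.nn as nn

def pet_mip(volume: np.ndarray, axis: int = 1) -> torch.Tensor:
    """Coronal MIP of a (D, H, W) PET volume, normalized to [0, 1]."""
    mip = volume.max(axis=axis)
    mip = (mip - mip.min()) / (np.ptp(mip) + 1e-8)
    return torch.from_numpy(mip).float()[None, None]  # (1, 1, H, W)

# A deliberately small CNN stand-in for the classifier.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # logits for {FDG, PSMA}
)

logits = classifier(pet_mip(np.random.rand(128, 96, 96)))
```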
arXiv Detail & Related papers (2024-09-18T17:16:57Z)
- From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging [0.9384264274298444]
We present our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture.
Key techniques include misalignment data augmentation and multi-modal pretraining across CT, MR, and PET datasets.
Compared to the default nnU-Net, which achieved a Dice score of 57.61, our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes.
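The misalignment augmentation can be pictured as randomly shifting one modality relative to the other; a minimal sketch, with the shift range and interpolation settings as assumptions:

```python
# Hypothetical sketch of misalignment augmentation: randomly translate the
# PET channel a few voxels relative to CT to mimic registration error.
import numpy as np
from scipy.ndimage import shift as nd_shift

def misalign_pet(ct, pet, max_shift=3.0, rng=None):
    """Shift PET by up to `max_shift` voxels per axis; CT stays fixed."""
    rng = np.random.default_rng() if rng is None else rng
    offset = rng.uniform(-max_shift, max_shift, size=3)
    return ct, nd_shift(pet, offset, order=1, mode="nearest")
```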
arXiv Detail & Related papers (2024-09-14T16:39:17Z)
- AutoPET Challenge: Tumour Synthesis for Data Augmentation [26.236831356731017]
We adapt the DiffTumor method, originally designed for CT images, to generate synthetic PET-CT images with lesions.
Our approach trains the generative model on the AutoPET dataset and uses it to expand the training data.
Our findings show that the model trained on the augmented dataset achieves a higher Dice score, demonstrating the potential of our data augmentation approach.
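A heavily simplified sketch of the augmentation step: a spherical blob stands in for the DiffTumor generator, and the synthetic lesion is pasted into the PET volume and its label mask. The lesion center is assumed to lie at least one radius from the volume border, and the label is assumed to be boolean.

```python
# Hypothetical sketch of lesion-synthesis augmentation with a placeholder
# generator (a learned diffusion model would replace synth_lesion).
import numpy as np

def synth_lesion(radius=5):
    """Placeholder generator: a smooth spherical uptake blob plus its mask."""
    zz, yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
    dist2 = zz ** 2 + yy ** 2 + xx ** 2
    return np.exp(-dist2 / (radius ** 2 / 2.0)), dist2 <= radius ** 2

def paste_lesion(pet, label, center):
    """Paste a synthetic lesion into the PET volume and the boolean mask."""
    blob, mask = synth_lesion()
    r = blob.shape[0] // 2
    z, y, x = center
    sl = np.s_[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
    pet[sl] += blob * pet.max()   # elevated uptake at the synthetic lesion
    label[sl] |= mask             # extend the ground-truth mask
    return pet, label
```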
arXiv Detail & Related papers (2024-09-12T14:23:19Z)
- Sliding Window FastEdit: A Framework for Lesion Annotation in Whole-body PET Images [24.7560446107659]
Deep learning has revolutionized the accurate segmentation of diseases in medical imaging, but typically requires dense voxelwise annotations.
This requirement presents a challenge for whole-body Positron Emission Tomography (PET) imaging, where lesions are scattered throughout the body.
We introduce SW-FastEdit - an interactive segmentation framework that accelerates the labeling by utilizing only a few user clicks instead of voxelwise annotations.
Our model outperforms existing non-sliding window interactive models on the AutoPET dataset and generalizes to the previously unseen HECKTOR dataset.
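A minimal sketch of sliding-window inference over a whole-body volume, the mechanism SW-FastEdit builds on; `model`, the patch size, and the averaging scheme are assumptions rather than the paper's implementation.

```python
# Hypothetical sketch: tile a whole-body volume into patches, run the model
# per patch, and average overlapping predictions. The volume is assumed to
# span at least one patch per dimension; border remainders are not handled.
import numpy as np

def sliding_window_predict(volume, model, patch=(128, 128, 128), stride=64):
    out = np.zeros(volume.shape, dtype=np.float32)
    norm = np.zeros(volume.shape, dtype=np.float32)
    D, H, W = volume.shape
    for z in range(0, max(D - patch[0], 0) + 1, stride):
        for y in range(0, max(H - patch[1], 0) + 1, stride):
            for x in range(0, max(W - patch[2], 0) + 1, stride):
                sl = np.s_[z:z + patch[0], y:y + patch[1], x:x + patch[2]]
                out[sl] += model(volume[sl])   # accumulate patch predictions
                norm[sl] += 1.0
    return out / np.maximum(norm, 1.0)         # average overlapping patches
```

In practice, MONAI's `sliding_window_inference` provides a more complete implementation of this pattern.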
arXiv Detail & Related papers (2023-11-24T13:45:58Z)
- Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked first.
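Weak point annotations are commonly turned into Gaussian heatmap targets for training a detector; a minimal sketch of that conversion, with no claim that AdaSyn uses exactly this target:

```python
# Hypothetical sketch: convert point annotations into a Gaussian heatmap.
import numpy as np

def points_to_heatmap(points, shape, sigma=2.0):
    """points: iterable of coordinate tuples, one per annotated synapse."""
    grid = np.indices(shape).astype(np.float32)       # (ndim, *shape)
    heat = np.zeros(shape, dtype=np.float32)
    for p in points:
        d2 = sum((g - c) ** 2 for g, c in zip(grid, p))
        heat = np.maximum(heat, np.exp(-d2 / (2 * sigma ** 2)))
    return heat  # peak of 1.0 at each annotated point
```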
arXiv Detail & Related papers (2023-08-31T05:05:53Z)
- TractCloud: Registration-free tractography parcellation with a novel local-global streamline point cloud representation [63.842881844791094]
Current tractography parcellation methods rely heavily on registration, but registration inaccuracies can affect parcellation.
We propose TractCloud, a registration-free framework that performs whole-brain tractography parcellation directly in individual subject space.
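A rough sketch of what a local-global streamline representation can look like: each streamline is paired with its nearest neighbors (local anatomy) and a random whole-brain sample (global context). The neighborhood sizes and the centroid-distance metric are assumptions.

```python
# Hypothetical sketch: build a local-global point-cloud input for one
# streamline; a point-cloud network would consume the returned tuple.
import numpy as np

def local_global_input(streamlines, idx, k_local=10, k_global=20, rng=None):
    """streamlines: (N, P, 3) array, N streamlines of P points; N > k_global."""
    rng = np.random.default_rng() if rng is None else rng
    centroids = streamlines.mean(axis=1)                        # (N, 3)
    d = np.linalg.norm(centroids - centroids[idx], axis=1)
    local = streamlines[np.argsort(d)[1:k_local + 1]]           # skip itself
    global_ = streamlines[rng.choice(len(streamlines), k_global, replace=False)]
    return streamlines[idx], local, global_
```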
arXiv Detail & Related papers (2023-07-18T06:35:12Z)
- Convolutional Monge Mapping Normalization for learning on sleep data [63.22081662149488]
We propose a new method called Convolutional Monge Mapping Normalization (CMMN).
CMMN filters the signals to adapt their power spectral density (PSD) to a Wasserstein barycenter estimated on training data.
Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent from the neural network architecture.
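A minimal sketch of the CMMN idea under the stationary-Gaussian view: estimate per-subject PSDs, average their square roots to obtain the barycenter PSD, and filter each signal so its spectrum matches it. All parameter choices are illustrative.

```python
# Hypothetical sketch of CMMN-style spectral normalization for EEG signals.
import numpy as np
from scipy.signal import welch

def cmmn_filter(signals, fs=100.0, nperseg=256):
    """signals: (n_subjects, n_samples) array, n_samples >= nperseg."""
    psds = np.stack([welch(s, fs=fs, nperseg=nperseg)[1] for s in signals])
    bary = np.mean(np.sqrt(psds), axis=0) ** 2   # Wasserstein barycenter PSD
    filtered = []
    for s, p in zip(signals, psds):
        S = np.fft.rfft(s)
        # interpolate the PSD ratio onto the FFT grid and reshape the spectrum
        freqs = np.fft.rfftfreq(len(s), d=1.0 / fs)
        wfreqs = np.fft.rfftfreq(2 * (len(p) - 1), d=1.0 / fs)  # Welch grid
        gain = np.interp(freqs, wfreqs, np.sqrt(bary / (p + 1e-12)))
        filtered.append(np.fft.irfft(S * gain, n=len(s)))
    return np.stack(filtered)
```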
arXiv Detail & Related papers (2023-05-30T08:24:01Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to multi-label disease classification on chest radiographs despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention, and, in contrast to CNNs, they encode no prior knowledge of local connectivity.
Our results show that while ViTs and CNNs perform on par, with a small benefit for ViTs, DeiTs outperform both if a reasonably large data set is available for training.
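A minimal sketch of the multi-label setup with a DeiT backbone via `timm`; the model name, the 14-label head (ChestX-ray14-style), and the loss choice are assumptions.

```python
# Hypothetical sketch: multi-label chest-radiograph classification with a
# DeiT backbone; one independent sigmoid/logit per finding, not a softmax.
import torch
import torch.nn as nn
import timm

model = timm.create_model("deit_small_patch16_224", pretrained=False,
                          num_classes=14)           # one logit per finding
criterion = nn.BCEWithLogitsLoss()                   # multi-label objective

images = torch.randn(2, 3, 224, 224)                 # grayscale replicated to 3ch
targets = torch.randint(0, 2, (2, 14)).float()       # multi-hot disease labels
loss = criterion(model(images), targets)
loss.backward()
```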
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build the domain irrelevant latent space image representation and demonstrate this method to outperform existing approaches on ABIDE data.
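A minimal sketch of a small 3D convolutional autoencoder of the kind described; channel sizes are illustrative, and the Fader-style adversarial branch that strips site information from the latent space is omitted.

```python
# Hypothetical sketch: a tiny 3D conv autoencoder for volumetric fMRI data.
import torch
import torch.nn as nn

class AE3D(nn.Module):
    def __init__(self, latent=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, latent, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(latent, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.enc(x)          # candidate domain-shared latent volume
        return self.dec(z), z

x = torch.randn(1, 1, 32, 32, 32)
recon, z = AE3D()(x)             # recon matches the input shape
```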
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- Progressive Adversarial Semantic Segmentation [11.323677925193438]
Deep convolutional neural networks can perform exceedingly well given full supervision.
The success of such fully-supervised models for various image analysis tasks is, however, limited by the availability of massive amounts of labeled data.
We propose a novel end-to-end medical image segmentation model, namely Progressive Adversarial Semantic Segmentation (PASS).
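A generic sketch of adversarial training for segmentation, the mechanism PASS builds on: a discriminator judges predicted masks against ground truth while the segmenter learns to fool it. The progressive schedule is omitted, and all shapes and weights are toy values.

```python
# Hypothetical sketch: adversarial segmentation with a toy segmenter/critic.
import torch
import torch.nn as nn

seg = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))        # toy segmenter
disc = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                     nn.Flatten(), nn.LazyLinear(1))      # real/fake critic
bce = nn.BCEWithLogitsLoss()

img = torch.randn(2, 1, 64, 64)
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
pred_logits = seg(img)
pred = torch.sigmoid(pred_logits)                         # soft mask for critic

d_loss = bce(disc(gt), torch.ones(2, 1)) + \
         bce(disc(pred.detach()), torch.zeros(2, 1))      # train the critic
g_loss = bce(pred_logits, gt) + \
         0.1 * bce(disc(pred), torch.ones(2, 1))          # train the segmenter
```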
arXiv Detail & Related papers (2020-05-08T22:48:00Z)