FlowNet-PET: Unsupervised Learning to Perform Respiratory Motion
Correction in PET Imaging
- URL: http://arxiv.org/abs/2205.14147v1
- Date: Fri, 27 May 2022 18:18:19 GMT
- Title: FlowNet-PET: Unsupervised Learning to Perform Respiratory Motion
Correction in PET Imaging
- Authors: Teaghan O'Briain, Carlos Uribe, Kwang Moo Yi, Jonas Teuwen, Ioannis
Sechopoulos, and Magdalena Bazalova-Carter
- Abstract summary: FlowNet-PET is an interpretable and unsupervised deep learning technique to correct for breathing motion in PET imaging.
As a proof-of-concept, FlowNet-PET was applied to anthropomorphic digital phantom data.
- Score: 11.451728125088113
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: To correct for breathing motion in PET imaging, an interpretable and
unsupervised deep learning technique, FlowNet-PET, was constructed. The network
was trained to predict the optical flow between two PET frames from different
breathing amplitude ranges. As a result, the trained model groups different
retrospectively-gated PET images into a single motion-corrected bin,
providing a final image with counting statistics similar to those of a non-gated image,
but without the blurring effects that were initially observed. As a
proof-of-concept, FlowNet-PET was applied to anthropomorphic digital phantom
data, which provided the possibility to design robust metrics to quantify the
corrections. When comparing the predicted optical flows to the ground truths,
the median absolute error was found to be smaller than the pixel and slice
widths, even for the phantom with a diaphragm movement of 21 mm. The
improvements were illustrated by comparing against images without motion and
computing the intersection over union (IoU) of the tumors as well as the
enclosed activity and coefficient of variation (CoV) within the no-motion tumor
volume before and after the corrections were applied. The average relative
improvements provided by the network were 54%, 90%, and 76% for the IoU, total
activity, and CoV, respectively. The results were then compared against the
conventional retrospective phase binning approach. FlowNet-PET achieved similar
results as retrospective binning, but only required one sixth of the scan
duration. The code and data used for training and analysis have been made
publicly available (https://github.com/teaghan/FlowNet_PET).
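The correction pipeline described in the abstract (warp each retrospectively-gated frame toward a reference amplitude bin using a predicted optical-flow field, then sum the warped frames into one motion-corrected bin) and the reported evaluation metrics (tumor IoU and CoV) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names `warp_frame` and `motion_corrected_bin` are hypothetical, and in FlowNet-PET the displacement fields would come from the trained network rather than being supplied by hand.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_frame(frame, flow):
    """Backward-warp a 3D PET frame with a dense displacement field.

    frame: (Z, Y, X) activity image from one amplitude gate
    flow:  (3, Z, Y, X) displacement pointing from each reference-bin
           voxel to its source location in the gated frame
    """
    # Sample the gated frame at the displaced coordinates (trilinear interp.)
    coords = np.indices(frame.shape).astype(np.float64) + flow
    return map_coordinates(frame, coords, order=1, mode="nearest")

def motion_corrected_bin(frames, flows):
    """Sum gated frames after warping them into one motion-corrected bin,
    recovering counting statistics similar to a non-gated acquisition."""
    return sum(warp_frame(f, d) for f, d in zip(frames, flows))

def iou(mask_a, mask_b):
    """Intersection over union of two boolean tumor masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union

def cov(image, mask):
    """Coefficient of variation of the activity inside a tumor mask."""
    vals = image[mask]
    return vals.std() / vals.mean()
```

With a digital phantom, the masks and flows have known ground truths, which is what makes the IoU, enclosed-activity, and CoV comparisons in the abstract possible.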
Related papers
- From FDG to PSMA: A Hitchhiker's Guide to Multitracer, Multicenter Lesion Segmentation in PET/CT Imaging [0.9384264274298444]
We present our solution for the autoPET III challenge, targeting multitracer, multicenter generalization using the nnU-Net framework with the ResEncL architecture.
Key techniques include misalignment data augmentation and multi-modal pretraining across CT, MR, and PET datasets.
Compared to the default nnU-Net, which achieved a Dice score of 57.61, our model significantly improved performance with a Dice score of 68.40, alongside a reduction in false positive (FPvol: 7.82) and false negative (FNvol: 10.35) volumes.
arXiv Detail & Related papers (2024-09-14T16:39:17Z)
- Two-Phase Multi-Dose-Level PET Image Reconstruction with Dose Level Awareness
We design a novel two-phase multi-dose-level PET reconstruction algorithm with dose level awareness.
The pre-training phase is devised to explore both fine-grained discriminative features and effective semantic representation.
The SPET prediction phase adopts a coarse prediction network that utilizes a pre-learned dose-level prior to generate a preliminary result.
arXiv Detail & Related papers (2024-04-02T01:57:08Z)
- PET Tracer Conversion among Brain PET via Variable Augmented Invertible Network [8.895830601854534]
A tracer conversion invertible neural network (TC-INN) for image projection is developed to map FDG images to DOPA images through deep learning.
Experimental results exhibited excellent generation capability in mapping between FDG and DOPA, suggesting that PET tracer conversion has great potential in the case of limited tracer applications.
arXiv Detail & Related papers (2023-11-01T12:04:33Z)
- Score-Based Generative Models for PET Image Reconstruction [38.72868748574543]
We propose several PET-specific adaptations of score-based generative models.
The proposed framework is developed for both 2D and 3D PET.
In addition, we provide an extension to guided reconstruction using magnetic resonance images.
arXiv Detail & Related papers (2023-08-27T19:43:43Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z) - DopUS-Net: Quality-Aware Robotic Ultrasound Imaging based on Doppler
Signal [48.97719097435527]
DopUS-Net combines the Doppler images with B-mode images to increase the segmentation accuracy and robustness of small blood vessels.
An artery re-identification module qualitatively evaluates the real-time segmentation results and automatically optimizes the probe pose for enhanced Doppler images.
arXiv Detail & Related papers (2023-05-15T18:19:29Z) - Self-Supervised Pre-Training for Deep Image Prior-Based Robust PET Image
Denoising [0.5999777817331317]
Deep image prior (DIP) has been successfully applied to positron emission tomography (PET) image restoration.
We propose a self-supervised pre-training model to improve the DIP-based PET image denoising performance.
arXiv Detail & Related papers (2023-02-27T06:55:00Z) - Direct Reconstruction of Linear Parametric Images from Dynamic PET Using
Nonlocal Deep Image Prior [13.747210115485487]
Direct reconstruction methods have been developed to estimate parametric images directly from the measured PET sinograms.
Due to limited counts received, signal-to-noise-ratio (SNR) and resolution of parametric images produced by direct reconstruction frameworks are still limited.
Recently, supervised deep learning methods have been successfully applied to medical imaging denoising/reconstruction when a large number of high-quality training labels is available.
arXiv Detail & Related papers (2021-06-18T21:30:22Z) - Revisiting 3D Context Modeling with Supervised Pre-training for
Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z) - Appearance Learning for Image-based Motion Estimation in Tomography [60.980769164955454]
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals.
Patient motion corrupts the geometry alignment in the reconstruction process resulting in motion artifacts.
We propose an appearance learning approach recognizing the structures of rigid motion independently from the scanned object.
arXiv Detail & Related papers (2020-06-18T09:49:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.