TAI-GAN: Temporally and Anatomically Informed GAN for early-to-late
frame conversion in dynamic cardiac PET motion correction
- URL: http://arxiv.org/abs/2308.12443v1
- Date: Wed, 23 Aug 2023 21:51:24 GMT
- Title: TAI-GAN: Temporally and Anatomically Informed GAN for early-to-late
frame conversion in dynamic cardiac PET motion correction
- Authors: Xueqi Guo, Luyao Shi, Xiongchao Chen, Bo Zhou, Qiong Liu, Huidong Xie,
Yi-Hwa Liu, Richard Palyo, Edward J. Miller, Albert J. Sinusas, Bruce
Spottiswoode, Chi Liu, Nicha C. Dvornek
- Abstract summary: The rapid tracer kinetics of rubidium-82 ($^{82}$Rb) raise significant challenges for inter-frame motion correction.
We propose a Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) to transform the early frames into the late reference frame.
We validated our proposed method on a clinical $^{82}$Rb PET dataset and found that our TAI-GAN can produce converted early frames with high image quality.
- Score: 14.611502926407669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid tracer kinetics of rubidium-82 ($^{82}$Rb) and high variation of
cross-frame distribution in dynamic cardiac positron emission tomography (PET)
raise significant challenges for inter-frame motion correction, particularly
for the early frames where conventional intensity-based image registration
techniques are not applicable. Alternatively, a promising approach utilizes
generative methods to handle the tracer distribution changes to assist existing
registration methods. To improve frame-wise registration and parametric
quantification, we propose a Temporally and Anatomically Informed Generative
Adversarial Network (TAI-GAN) to transform the early frames into the late
reference frame using an all-to-one mapping. Specifically, a feature-wise
linear modulation layer encodes channel-wise parameters generated from temporal
tracer kinetics information, and rough cardiac segmentations with local shifts
serve as the anatomical information. We validated our proposed method on a
clinical $^{82}$Rb PET dataset and found that our TAI-GAN can produce converted
early frames with high image quality, comparable to the real reference frames.
After TAI-GAN conversion, motion estimation accuracy and clinical myocardial
blood flow (MBF) quantification were improved compared to using the original
frames. Our code is published at https://github.com/gxq1998/TAI-GAN.
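The abstract's key conditioning mechanism is a feature-wise linear modulation (FiLM) layer: a temporal code summarizing the frame's tracer kinetics is mapped to one scale and one shift per channel, which then modulate the generator's intermediate feature maps. A minimal NumPy sketch of that idea follows; it is an illustration under assumed shapes and a single linear mapping, not the authors' implementation (see the linked repository for that):

```python
import numpy as np

rng = np.random.default_rng(0)

def film_modulate(feature_maps, temporal_code, w_gamma, b_gamma, w_beta, b_beta):
    """Feature-wise linear modulation (FiLM) sketch.

    feature_maps : (C, H, W) intermediate generator activations
    temporal_code: (T,) vector summarizing frame-level tracer kinetics
                   (the exact encoding is an assumption here)
    The code is mapped to per-channel (gamma, beta) pairs that scale
    and shift each feature map channel.
    """
    gamma = w_gamma @ temporal_code + b_gamma   # (C,) per-channel scale
    beta = w_beta @ temporal_code + b_beta      # (C,) per-channel shift
    return gamma[:, None, None] * feature_maps + beta[:, None, None]

# Toy dimensions: 4 channels, 8x8 maps, 3-dim temporal code.
C, H, W, T = 4, 8, 8, 3
x = rng.standard_normal((C, H, W))
t = rng.standard_normal(T)
w_g, b_g = rng.standard_normal((C, T)), np.zeros(C)
w_b, b_b = rng.standard_normal((C, T)), np.zeros(C)

y = film_modulate(x, t, w_g, b_g, w_b, b_b)
print(y.shape)  # (4, 8, 8): same shape as the input feature maps
```

FiLM leaves the spatial layout of the features untouched, which is why it suits this task: the anatomy in the frame is preserved while the channel statistics are steered toward the late-frame tracer distribution.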
Related papers
- Progressive Retinal Image Registration via Global and Local Deformable Transformations [49.032894312826244]
We propose a hybrid registration framework called HybridRetina.
We use a keypoint detector and a deformation network called GAMorph to estimate the global transformation and local deformable transformation.
Experiments on two widely-used datasets, FIRE and FLoRI21, show that our proposed HybridRetina significantly outperforms some state-of-the-art methods.
arXiv Detail & Related papers (2024-09-02T08:43:50Z)
- LaMoD: Latent Motion Diffusion Model For Myocardial Strain Generation [5.377722774297911]
We introduce a novel Latent Motion Diffusion model (LaMoD) to predict highly accurate DENSE motions from standard CMR videos.
Experimental results demonstrate that our proposed method, LaMoD, significantly improves the accuracy of motion analysis in standard CMR images.
arXiv Detail & Related papers (2024-07-02T12:54:32Z)
- TAI-GAN: A Temporally and Anatomically Informed Generative Adversarial Network for early-to-late frame conversion in dynamic cardiac PET inter-frame motion correction [15.380659401728735]
We propose a novel method called Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN) to convert early frames into frames with a tracer distribution similar to that of the last reference frame.
Our proposed method was evaluated on a clinical $^{82}$Rb PET dataset, and the results show that our TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames.
arXiv Detail & Related papers (2024-02-14T20:39:07Z)
- GSMorph: Gradient Surgery for cine-MRI Cardiac Deformable Registration [62.41725951450803]
Learning-based deformable registration relies on weighted objective functions trading off registration accuracy and smoothness of the field.
We construct a registration model based on the gradient surgery mechanism, named GSMorph, to achieve a hyperparameter-free balance across multiple losses.
Our method is model-agnostic and can be merged into any deep registration network without introducing extra parameters or slowing down inference.
arXiv Detail & Related papers (2023-06-26T13:32:09Z)
- Unsupervised Echocardiography Registration through Patch-based MLPs and Transformers [6.330832343516528]
This work introduces three patch-based frameworks for image registration using MLPs and transformers.
We demonstrate comparable and even better registration performance than a popular CNN registration model.
arXiv Detail & Related papers (2022-11-21T17:59:04Z)
- Unsupervised inter-frame motion correction for whole-body dynamic PET using convolutional long short-term memory in a convolutional neural network [9.349668170221975]
We develop an unsupervised deep learning-based framework to correct inter-frame body motion.
The motion estimation network is a convolutional neural network with a combined convolutional long short-term memory layer.
Once trained, the motion estimation inference time of our proposed network was around 460 times faster than the conventional registration baseline.
arXiv Detail & Related papers (2022-06-13T17:38:16Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer [55.936527926778695]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited temporal receptive fields.
In this paper, we propose the PhysFormer, an end-to-end video transformer based architecture.
arXiv Detail & Related papers (2021-11-23T18:57:11Z)
- Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation [51.916106055115755]
We propose a new CNN architecture that is pose- and scale-invariant thanks to the use of a Spatial Transformer Network (STN).
Our architecture is composed of three sequential modules that are estimated together during training.
We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans.
arXiv Detail & Related papers (2021-07-06T14:50:03Z)
- TransCamP: Graph Transformer for 6-DoF Camera Pose Estimation [77.09542018140823]
We propose a neural network approach with a graph transformer backbone, namely TransCamP, to address the camera relocalization problem.
TransCamP effectively fuses the image features, camera pose information and inter-frame relative camera motions into encoded graph attributes.
arXiv Detail & Related papers (2021-05-28T19:08:43Z)
- Clinically Translatable Direct Patlak Reconstruction from Dynamic PET with Motion Correction Using Convolutional Neural Network [9.949523630885261]
The Patlak model is widely used in $^{18}$F-FDG dynamic positron emission tomography (PET) imaging.
In this work, we propose a data-driven framework that maps dynamic PET images to high-quality motion-corrected direct Patlak images.
arXiv Detail & Related papers (2020-09-13T02:51:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.