Diffeomorphic Transformer-based Abdomen MRI-CT Deformable Image Registration
- URL: http://arxiv.org/abs/2405.02692v1
- Date: Sat, 4 May 2024 15:04:06 GMT
- Title: Diffeomorphic Transformer-based Abdomen MRI-CT Deformable Image Registration
- Authors: Yang Lei, Luke A. Matkovic, Justin Roper, Tonghe Wang, Jun Zhou, Beth Ghavidel, Mark McDonald, Pretesh Patel, Xiaofeng Yang
- Abstract summary: This paper aims to create a deep learning framework that can estimate the deformation vector field (DVF) for directly registering abdominal MRI-CT images.
The model integrated Swin transformers, which have demonstrated superior performance in motion tracking, into the convolutional neural network (CNN) for deformation feature extraction.
- Score: 5.204992327758393
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper aims to create a deep learning framework that can estimate the deformation vector field (DVF) for directly registering abdominal MRI-CT images. The proposed method assumed a diffeomorphic deformation. By using topology-preserved deformation features extracted from the probabilistic diffeomorphic registration model, abdominal motion can be accurately obtained and utilized for DVF estimation. The model integrated Swin transformers, which have demonstrated superior performance in motion tracking, into the convolutional neural network (CNN) for deformation feature extraction. The model was optimized using a cross-modality image similarity loss and a surface matching loss. To compute the image loss, a modality-independent neighborhood descriptor (MIND) was used between the deformed MRI and CT images. The surface matching loss was determined by measuring the distance between the warped coordinates of the surfaces of contoured structures on the MRI and CT images. The deformed MRI image was assessed against the CT image using the target registration error (TRE), Dice similarity coefficient (DSC), and mean surface distance (MSD) between the deformed contours of the MRI image and manual contours of the CT image. When compared to only rigid registration, DIR with the proposed method resulted in an increase of the mean DSC values of the liver and portal vein from 0.850 and 0.628 to 0.903 and 0.763, a decrease of the mean MSD of the liver from 7.216 mm to 3.232 mm, and a decrease of the TRE from 26.238 mm to 8.492 mm. The proposed deformable image registration method based on a diffeomorphic transformer provides an effective and efficient way to generate an accurate DVF from an MRI-CT image pair of the abdomen. It could be utilized in the current treatment planning workflow for liver radiotherapy.
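As a rough, illustrative sketch of the diffeomorphic machinery the abstract describes (integrating a predicted stationary velocity field into a topology-preserving DVF and warping the MRI toward the CT), the following PyTorch-style code shows scaling-and-squaring integration, warping, and a Dice check. It is not the authors' implementation; the tensor shapes, step count, and the toy random velocity field are assumptions.

```python
import torch
import torch.nn.functional as F

def make_identity_grid(shape, device):
    """Identity sampling grid in normalized [-1, 1] coordinates for grid_sample.
    shape: (D, H, W) of the volume."""
    d, h, w = shape
    zs = torch.linspace(-1, 1, d, device=device)
    ys = torch.linspace(-1, 1, h, device=device)
    xs = torch.linspace(-1, 1, w, device=device)
    z, y, x = torch.meshgrid(zs, ys, xs, indexing="ij")
    # grid_sample expects (x, y, z) ordering in the last dimension
    return torch.stack((x, y, z), dim=-1).unsqueeze(0)  # (1, D, H, W, 3)

def warp(volume, flow, grid):
    """Warp a (1, C, D, H, W) volume with a displacement field `flow`
    of shape (1, D, H, W, 3), given in normalized coordinates."""
    return F.grid_sample(volume, grid + flow, mode="bilinear",
                         padding_mode="border", align_corners=True)

def integrate_velocity(velocity, grid, steps=7):
    """Scaling-and-squaring integration of a stationary velocity field
    (1, D, H, W, 3) into an (approximately) diffeomorphic displacement field."""
    flow = velocity / (2 ** steps)
    for _ in range(steps):
        # compose the current flow with itself: phi <- phi o phi
        sampled = F.grid_sample(flow.permute(0, 4, 1, 2, 3), grid + flow,
                                mode="bilinear", padding_mode="border",
                                align_corners=True).permute(0, 2, 3, 4, 1)
        flow = flow + sampled
    return flow

def dice(a, b, eps=1e-6):
    """Dice similarity coefficient between two binary masks, e.g. a deformed
    MRI contour vs. a manual CT contour."""
    inter = (a * b).sum()
    return float((2 * inter + eps) / (a.sum() + b.sum() + eps))

# Hypothetical usage: in the paper, `velocity` would come from the Swin
# transformer / CNN feature extractor; here it is random for illustration only.
mri = torch.rand(1, 1, 32, 64, 64)               # moving MRI volume (toy size)
velocity = 0.01 * torch.randn(1, 32, 64, 64, 3)  # predicted velocity field
grid = make_identity_grid(mri.shape[2:], mri.device)
dvf = integrate_velocity(velocity, grid)         # topology-preserving DVF
warped_mri = warp(mri, dvf, grid)                # deformed MRI to compare with CT
```

Composing the field with itself in this way keeps the overall deformation (approximately) invertible, which is what the diffeomorphic assumption in the abstract buys; the MIND image similarity loss and the surface matching loss are not sketched here.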
Related papers
- Tumor aware recurrent inter-patient deformable image registration of computed tomography scans with lung cancer [6.361348748202733]
Voxel-based analysis (VBA) for population-level radiotherapy outcomes modeling requires inter-patient deformable image registration (DIR).
We developed a tumor-aware recurrent registration (TRACER) deep learning (DL) method and evaluated its suitability for VBA.
arXiv Detail & Related papers (2024-09-18T12:11:59Z)
- Rotational Augmented Noise2Inverse for Low-dose Computed Tomography Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for low-dose CT (LDCT) reconstruction, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across different ranges of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts (a minimal illustration of this swap follows this entry).
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
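As a loose, generic illustration of the normalization swap described in the entry above (not the paper's actual architecture; the block layout, channel counts, and group count are assumptions), batch normalization can be replaced by group or layer normalization in a convolutional block as follows.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch, norm="group"):
    """3x3 conv block whose normalization layer can be swapped between
    BatchNorm, GroupNorm, and a LayerNorm-style GroupNorm with one group."""
    if norm == "batch":
        norm_layer = nn.BatchNorm2d(out_ch)
    elif norm == "group":
        norm_layer = nn.GroupNorm(num_groups=8, num_channels=out_ch)
    else:  # "layer": normalize over all channels, i.e. GroupNorm(1, C)
        norm_layer = nn.GroupNorm(num_groups=1, num_channels=out_ch)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
        norm_layer,
        nn.ReLU(inplace=True),
    )

# Group/layer normalization computes statistics per sample at both train and
# test time, so there is no stored training statistic to mismatch when the
# input distribution shifts -- the robustness argument made above.
robust_block = conv_block(1, 32, norm="group")
```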
- Synthetic CT Generation from MRI using 3D Transformer-based Denoising Diffusion Model [2.232713445482175]
Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning.
We propose an MRI-to-CT transformer-based denoising diffusion probabilistic model (MC-DDPM) to transform MRI into high-quality sCT.
arXiv Detail & Related papers (2023-05-31T00:32:00Z)
- DDMM-Synth: A Denoising Diffusion Model for Cross-modal Medical Image Synthesis with Sparse-view Measurement Embedding [7.6849475214826315]
We propose a novel framework called DDMM-Synth for medical image synthesis.
It combines an MRI-guided diffusion model with a new CT measurement embedding reverse sampling scheme.
It can adjust the number of CT projections a posteriori for a particular clinical application, and its modified version can significantly improve results for noisy cases.
arXiv Detail & Related papers (2023-03-28T07:13:11Z)
- Joint Rigid Motion Correction and Sparse-View CT via Self-Calibrating Neural Field [37.86878619100209]
NeRF has received wide attention in Sparse-View (SV) CT reconstruction problems as a self-supervised deep learning framework.
Existing NeRF-based SVCT methods strictly assume that there is no relative motion during the CT acquisition.
This work proposes a self-calibrating neural field that recovers an artifact-free image from rigid motion-corrupted SV measurements.
arXiv Detail & Related papers (2022-10-23T13:55:07Z)
- View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view-by-view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
arXiv Detail & Related papers (2022-09-20T11:58:23Z)
- Symmetry-Enhanced Attention Network for Acute Ischemic Infarct Segmentation with Non-Contrast CT Images [50.55978219682419]
We propose a symmetry enhanced attention network (SEAN) for acute ischemic infarct segmentation.
Our proposed network automatically transforms an input CT image into the standard space where the brain tissue is bilaterally symmetric.
The proposed SEAN outperforms some symmetry-based state-of-the-art methods in terms of both Dice coefficient and infarct localization.
arXiv Detail & Related papers (2021-10-11T07:13:26Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D-context-enhanced 2D features for universal lesion detection in CT slices (a generic pseudo-3D convolution sketch follows this entry).
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
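As a generic sketch of the pseudo-3D idea behind the MP3D FPN entry above (a 2D in-plane convolution followed by a 1D through-plane convolution, so 2D slice features pick up 3D context cheaply), not the paper's actual MP3D design; the kernel sizes and channel counts are assumptions.

```python
import torch
import torch.nn as nn

class Pseudo3DConv(nn.Module):
    """Generic pseudo-3D convolution: a 2D in-plane (1x3x3) convolution
    followed by a 1D through-plane (3x1x1) convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))
        self.depth = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                               padding=(1, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, slices, H, W)
        return self.act(self.depth(self.act(self.spatial(x))))

x = torch.rand(2, 16, 8, 64, 64)
y = Pseudo3DConv(16, 32)(x)   # -> (2, 32, 8, 64, 64)
```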
- Patch-based field-of-view matching in multi-modal images for electroporation-based ablations [0.6285581681015912]
Multi-modal imaging sensors are currently involved at different steps of an interventional therapeutic workflow.
Merging this information relies on a correct spatial alignment of the observed anatomy between the acquired images.
We show that a regional registration approach using voxel patches provides a good structural compromise between the voxel-wise and "global shifts" approaches (a minimal per-patch matching sketch follows this entry).
arXiv Detail & Related papers (2020-11-09T11:27:45Z)
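As a rough sketch of the "regional" idea the entry above describes (one shift per voxel patch, sitting between a single global shift and a dense voxel-wise field), the following hypothetical example estimates per-patch integer shifts on 2D slices with normalized cross-correlation; the paper works with multi-modal 3D data and its own similarity criterion, so everything here is an assumption for illustration.

```python
import numpy as np

def patch_shifts(fixed, moving, patch=32, max_shift=8):
    """Estimate one integer (dy, dx) shift per patch of `fixed` by maximizing
    normalized cross-correlation against `moving` in a small search window."""
    H, W = fixed.shape
    shifts = {}
    for y0 in range(0, H - patch + 1, patch):
        for x0 in range(0, W - patch + 1, patch):
            f = fixed[y0:y0 + patch, x0:x0 + patch]
            fz = (f - f.mean()) / (f.std() + 1e-8)
            best_score, best_shift = -np.inf, (0, 0)
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    ys, xs = y0 + dy, x0 + dx
                    if ys < 0 or xs < 0 or ys + patch > H or xs + patch > W:
                        continue
                    m = moving[ys:ys + patch, xs:xs + patch]
                    mz = (m - m.mean()) / (m.std() + 1e-8)
                    score = float((fz * mz).mean())
                    if score > best_score:
                        best_score, best_shift = score, (dy, dx)
            shifts[(y0, x0)] = best_shift
    return shifts

# A single global shift uses one (dy, dx) for the whole image; a dense
# voxel-wise method estimates one per voxel. Per-patch shifts sit in between,
# which is the "structural compromise" the entry refers to.
```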
- A Learning-based Method for Online Adjustment of C-arm Cone-Beam CT Source Trajectories for Artifact Avoidance [47.345403652324514]
The reconstruction quality attainable with commercial CBCT devices is insufficient due to metal artifacts in the presence of pedicle screws.
We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a certain task.
We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory.
arXiv Detail & Related papers (2020-08-14T09:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.