Multimodal Deformable Image Registration for Long-COVID Analysis Based on Progressive Alignment and Multi-perspective Loss
- URL: http://arxiv.org/abs/2406.15172v1
- Date: Fri, 21 Jun 2024 14:19:18 GMT
- Title: Multimodal Deformable Image Registration for Long-COVID Analysis Based on Progressive Alignment and Multi-perspective Loss
- Authors: Jiahua Li, James T. Grist, Fergus V. Gleeson, Bartłomiej W. Papież
- Abstract summary: Long COVID is characterized by persistent symptoms, particularly pulmonary impairment.
Integrating functional data from XeMRI with structural data from CT is crucial for comprehensive analysis and effective treatment strategies.
We propose an end-to-end multimodal deformable image registration method that achieves superior performance for aligning long-COVID lung CT and proton density MRI data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Long COVID is characterized by persistent symptoms, particularly pulmonary impairment, which necessitates advanced imaging for accurate diagnosis. Hyperpolarised Xenon-129 MRI (XeMRI) offers a promising avenue by visualising lung ventilation, perfusion, and gas transfer. Integrating functional data from XeMRI with structural data from Computed Tomography (CT) is crucial for comprehensive analysis and effective treatment strategies in long COVID, requiring precise data alignment of these complementary imaging modalities. To this end, CT-MRI registration is an essential intermediate step, given the significant challenges posed by the direct alignment of CT and XeMRI. Therefore, we propose an end-to-end multimodal deformable image registration method that achieves superior performance for aligning long-COVID lung CT and proton density MRI (pMRI) data. Moreover, our method incorporates a novel Multi-perspective Loss (MPL) function, enhancing state-of-the-art deep learning methods for monomodal registration by making them adaptable to multimodal tasks. The registration results achieve a Dice coefficient of 0.913, indicating a substantial improvement over state-of-the-art multimodal image registration techniques. Since the XeMRI and pMRI images are acquired in the same sessions and can be roughly aligned, our results facilitate subsequent registration between XeMRI and CT, thereby potentially enhancing clinical decision-making for long COVID management.
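The abstract reports evaluation via a Dice coefficient of 0.913 but does not spell out the Multi-perspective Loss. The snippet below is a minimal, hypothetical PyTorch sketch: `dice_coefficient` follows the standard Dice definition used to compare warped lung masks, while `multi_perspective_loss`, `ncc`, and `gradient_magnitude` are illustrative assumptions of how a similarity loss computed from several image "perspectives" (raw intensity and gradient magnitude) plus a displacement-field smoothness term might look for CT-to-pMRI registration; none of these names or formulations come from the paper itself.

```python
# Hypothetical sketch only: Dice evaluation of warped lung masks and an assumed
# "multi-perspective"-style similarity loss for CT-to-pMRI deformable registration.
# The paper's actual MPL formulation is not given in the abstract.
import torch
import torch.nn.functional as F


def dice_coefficient(mask_a: torch.Tensor, mask_b: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks, e.g. warped CT lung mask vs. pMRI lung mask."""
    inter = (mask_a * mask_b).sum()
    return (2.0 * inter + eps) / (mask_a.sum() + mask_b.sum() + eps)


def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
    """Finite-difference gradient magnitude of a 3D volume (B, 1, D, H, W): one extra 'perspective'."""
    dz = img[:, :, 1:, :, :] - img[:, :, :-1, :, :]
    dy = img[:, :, :, 1:, :] - img[:, :, :, :-1, :]
    dx = img[:, :, :, :, 1:] - img[:, :, :, :, :-1]
    dz = F.pad(dz, (0, 0, 0, 0, 0, 1))  # pad depth back to original size
    dy = F.pad(dy, (0, 0, 0, 1, 0, 0))  # pad height
    dx = F.pad(dx, (0, 1, 0, 0, 0, 0))  # pad width
    return torch.sqrt(dx ** 2 + dy ** 2 + dz ** 2 + 1e-8)


def ncc(a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Global normalized cross-correlation between two volumes (higher is more similar)."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)


def multi_perspective_loss(warped_ct: torch.Tensor, pmri: torch.Tensor,
                           flow: torch.Tensor, smooth_weight: float = 1.0) -> torch.Tensor:
    """Assumed MPL-style loss: similarity measured from several image 'perspectives'
    (raw intensity and gradient magnitude here), plus smoothness of the 3-channel displacement field."""
    sim_intensity = ncc(warped_ct, pmri)
    sim_edges = ncc(gradient_magnitude(warped_ct), gradient_magnitude(pmri))
    smooth = (gradient_magnitude(flow[:, 0:1]).mean()
              + gradient_magnitude(flow[:, 1:2]).mean()
              + gradient_magnitude(flow[:, 2:3]).mean())
    return -(sim_intensity + sim_edges) + smooth_weight * smooth
```

Normalized cross-correlation is used here purely for illustration; intensity relationships between CT and pMRI are non-linear, which is precisely why a multimodal-aware loss such as the proposed MPL is needed in practice.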
Related papers
- BrainMVP: Multi-modal Vision Pre-training for Brain Image Analysis using Multi-parametric MRI [11.569448567735435]
BrainMVP is a multi-modal vision pre-training framework for brain image analysis using multi-parametric MRI scans.
Cross-modal reconstruction is explored to learn distinctive brain image embeddings and efficient modality fusion capabilities.
Experiments on downstream tasks demonstrate superior performance compared to state-of-the-art pre-training methods in the medical domain.
arXiv Detail & Related papers (2024-10-14T15:12:16Z)
- A Diffusion-based Xray2MRI Model: Generating Pseudo-MRI Volumes From one Single X-ray [6.014316825270666]
We introduce a novel diffusion-based Xray2MRI model capable of generating pseudo-MRI volumes from a single X-ray image.
Experimental results demonstrate that our proposed approach is capable of generating pseudo-MRI sequences that approximate real MRI scans.
arXiv Detail & Related papers (2024-10-09T15:44:34Z)
- Weakly supervised alignment and registration of MR-CT for cervical cancer radiotherapy [9.060365057476133]
Cervical cancer is one of the leading causes of death in women.
We propose a preliminary spatial alignment algorithm and a weakly supervised multimodal registration network.
arXiv Detail & Related papers (2024-05-21T15:05:51Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide, and are common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Enhanced Synthetic MRI Generation from CT Scans Using CycleGAN with Feature Extraction [3.2088888904556123]
We propose an approach for enhanced monomodal registration using synthetic MRI images from CT scans.
Our methodology shows promising results, outperforming several state-of-the-art methods.
arXiv Detail & Related papers (2023-10-31T16:39:56Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain and restore image details in the image domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- ShuffleUNet: Super resolution of diffusion-weighted MRIs using deep learning [47.68307909984442]
Single Image Super-Resolution (SISR) is a technique aimed to obtain high-resolution (HR) details from one single low-resolution input image.
Deep learning extracts prior knowledge from large datasets and produces superior MRI images from their low-resolution counterparts.
arXiv Detail & Related papers (2021-02-25T14:52:23Z)
- Patch-based field-of-view matching in multi-modal images for electroporation-based ablations [0.6285581681015912]
Multi-modal imaging sensors are currently involved at different steps of an interventional therapeutic workflow.
Merging this information relies on a correct spatial alignment of the observed anatomy between the acquired images.
We show that a regional registration approach using voxel patches provides a good structural compromise between the voxel-wise and "global shifts" approaches.
arXiv Detail & Related papers (2020-11-09T11:27:45Z)
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
- Cardiac Segmentation on Late Gadolinium Enhancement MRI: A Benchmark Study from Multi-Sequence Cardiac MR Segmentation Challenge [43.01944884184009]
This paper presents selected results from the Multi-Sequence Cardiac MR (MS-CMR) segmentation challenge, held in conjunction with MICCAI 2019.
The challenge aimed to develop new algorithms and benchmark existing ones for LGE CMR segmentation, comparing them objectively.
The success of these methods was mainly attributed to the inclusion of auxiliary sequences from the MS-CMR images.
arXiv Detail & Related papers (2020-06-22T17:04:38Z)
- Multifold Acceleration of Diffusion MRI via Slice-Interleaved Diffusion Encoding (SIDE) [50.65891535040752]
We propose a diffusion encoding scheme, called Slice-Interleaved Diffusion Encoding (SIDE), that interleaves each diffusion-weighted (DW) image volume with slices encoded with different diffusion gradients.
We also present a method based on deep learning for effective reconstruction of DW images from the highly slice-undersampled data.
arXiv Detail & Related papers (2020-02-25T14:48:17Z)