JUMP: A joint multimodal registration pipeline for neuroimaging with
minimal preprocessing
- URL: http://arxiv.org/abs/2401.14250v1
- Date: Thu, 25 Jan 2024 15:40:19 GMT
- Title: JUMP: A joint multimodal registration pipeline for neuroimaging with
minimal preprocessing
- Authors: Adria Casamitjana and Juan Eugenio Iglesias and Raul Tudela and Aida
Ninerola-Baizan and Roser Sala-Llonch
- Abstract summary: We present a pipeline for unbiased and robust registration of neuroimaging modalities with minimal pre-processing.
The pipeline currently works with structural MRI, resting-state fMRI and amyloid PET images.
We show the predictive power of the derived biomarkers in a case-control study and examine the cross-modal relationships between image modalities.
- Score: 1.3549498237473223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a pipeline for unbiased and robust multimodal registration of
neuroimaging modalities with minimal pre-processing. While typical multimodal
studies need to use multiple independent processing pipelines, with diverse
options and hyperparameters, we propose a single and structured framework to
jointly process different image modalities. The use of state-of-the-art
learning-based techniques enables fast inference, which makes the presented
method suitable for large-scale and/or multi-cohort datasets with a varying
number of modalities per session. The pipeline currently works with structural
MRI, resting-state fMRI and amyloid PET images. We show the predictive power of
the derived biomarkers in a case-control study and examine the cross-modal
relationships between image modalities. The code can be found at
https://github.com/acasamitjana/JUMP.
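As a toy illustration of the kind of optimization a registration module in such a pipeline performs, the sketch below fits a 2D affine transform by gradient descent in PyTorch. It is deliberately simplified and mono-modal (MSE loss); JUMP itself uses learned, multimodal-aware components, and none of the names below come from the JUMP codebase.

```python
# Minimal gradient-based affine registration sketch (illustrative only; this
# is NOT the JUMP implementation, which uses learning-based registration).
import torch
import torch.nn.functional as F

def register_affine(moving, fixed, steps=200, lr=1e-2):
    """Fit a 2D affine aligning `moving` to `fixed`; both are [1, 1, H, W]."""
    theta = torch.tensor([[1., 0., 0.], [0., 1., 0.]], requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        grid = F.affine_grid(theta.unsqueeze(0), fixed.shape, align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        loss = F.mse_loss(warped, fixed)  # mono-modal stand-in for a real metric
        opt.zero_grad(); loss.backward(); opt.step()
    return theta.detach()

# Toy usage: recover a known 6-pixel horizontal shift of a synthetic square.
fixed = torch.zeros(1, 1, 64, 64); fixed[..., 24:40, 24:40] = 1.0
moving = torch.roll(fixed, shifts=6, dims=-1)
theta = register_affine(moving, fixed)
print(theta)  # translation entry moves toward the shift in normalized units
```

A real multimodal pipeline would replace the MSE term with a similarity measure that tolerates contrast differences (e.g., mutual information) or with a trained network, which is where the fast inference cited in the abstract comes from.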
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
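A minimal sketch of the forward model this paper addresses may help: CS-MRI acquires only a subset of k-space, and the reconstruction network must undo the resulting aliasing. The mask layout and names below are illustrative, not the paper's neural operator.

```python
# Toy retrospective k-space undersampling plus the zero-filled baseline a
# learned CS-MRI model would improve on (illustrative, not the paper's code).
import numpy as np

def undersample(image, accel=4, seed=0):
    """Keep ~1/accel of k-space columns at random (toy 1D Cartesian mask)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    mask = rng.random(image.shape[1]) < 1.0 / accel
    mask[image.shape[1] // 2 - 8 : image.shape[1] // 2 + 8] = True  # keep low freqs
    k_under = k * mask[None, :]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))
    return k_under, zero_filled

img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0  # stand-in for an MR slice
k_under, zf = undersample(img)
print(zf.shape)  # (64, 64) aliased baseline reconstruction
```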
- Deep Multimodal Collaborative Learning for Polyp Re-Identification [4.4028428688691905]
Colonoscopic Polyp Re-Identification aims to match the same polyp from a large gallery with images from different views taken using different cameras.
Traditional object ReID methods that directly adopt CNN models trained on ImageNet produce unsatisfactory retrieval performance.
We propose a novel Deep Multimodal Collaborative Learning framework named DMCL for polyp re-identification.
arXiv Detail & Related papers (2024-08-12T04:05:19Z)
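The retrieval step underlying any re-identification system, polyp ReID included, reduces to ranking a gallery by embedding similarity. The sketch below shows that step only and says nothing about DMCL's multimodal collaborative training.

```python
# Generic ReID retrieval: rank gallery embeddings by cosine similarity to a
# query embedding (illustrative of the setting, not the DMCL framework).
import torch
import torch.nn.functional as F

def rank_gallery(query_feat, gallery_feats):
    """query_feat: [C]; gallery_feats: [N, C]. Returns indices, best first."""
    sims = F.cosine_similarity(query_feat.unsqueeze(0), gallery_feats, dim=1)
    return torch.argsort(sims, descending=True)

# Toy usage with random embeddings (a real system uses a trained backbone).
query = torch.randn(256)
gallery = torch.randn(100, 256)
print(rank_gallery(query, gallery)[:5])  # top-5 candidate matches
```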
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
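One way to read "directly modulating the generation process with fMRI signals" is as conditioning: the fMRI vector is projected into embeddings the diffusion model attends to. The module below is a hypothetical stand-in, not NeuroPictor's architecture; all dimensions are assumptions.

```python
# Hypothetical fMRI conditioner: map an fMRI vector to token embeddings a
# diffusion U-Net could cross-attend to (not NeuroPictor's actual design).
import torch
import torch.nn as nn

class FMRIConditioner(nn.Module):
    def __init__(self, fmri_dim=4096, n_tokens=8, token_dim=768):
        super().__init__()
        self.n_tokens, self.token_dim = n_tokens, token_dim
        self.proj = nn.Linear(fmri_dim, n_tokens * token_dim)

    def forward(self, fmri):
        # fmri: [B, fmri_dim] -> conditioning tokens [B, n_tokens, token_dim]
        return self.proj(fmri).view(-1, self.n_tokens, self.token_dim)

tokens = FMRIConditioner()(torch.randn(2, 4096))
print(tokens.shape)  # torch.Size([2, 8, 768])
```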
- Disentangled Multimodal Brain MR Image Translation via Transformer-based Modality Infuser [12.402947207350394]
We propose a transformer-based modality infuser designed to synthesize multimodal brain MR images.
In our method, we extract modality-agnostic features from the encoder and then transform them into modality-specific features.
We carried out experiments on the BraTS 2018 dataset, translating between four MR modalities.
arXiv Detail & Related papers (2024-02-01T06:34:35Z)
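A minimal interpretation of the modality-infuser idea, turning modality-agnostic features into modality-specific ones via a learned modality embedding, is sketched below. The paper's transformer-based design is more elaborate, and all names and dimensions here are hypothetical.

```python
# One plausible reading of a "modality infuser": add a learned modality
# embedding to shared features to obtain modality-specific features.
import torch
import torch.nn as nn

class ModalityInfuser(nn.Module):
    def __init__(self, dim=256, n_modalities=4):
        super().__init__()
        self.embed = nn.Embedding(n_modalities, dim)
        self.mix = nn.Linear(dim, dim)

    def forward(self, shared, modality_id):
        # shared: [B, N, dim] modality-agnostic tokens; modality_id: [B]
        return self.mix(shared + self.embed(modality_id).unsqueeze(1))

infuser = ModalityInfuser()
out = infuser(torch.randn(2, 16, 256), torch.tensor([0, 3]))  # two modality ids
print(out.shape)  # torch.Size([2, 16, 256])
```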
- Unified Brain MR-Ultrasound Synthesis using Multi-Modal Hierarchical Representations [34.821129614819604]
We introduce MHVAE, a deep hierarchical variational auto-encoder (VAE) that synthesizes missing images from various modalities.
Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi-modal images in a common latent representation.
Our model outperformed multi-modal VAEs, conditional GANs, and the current state-of-the-art unified method (ResViT) for synthesizing missing images.
arXiv Detail & Related papers (2023-09-15T20:21:03Z)
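Multimodal VAEs such as MHVAE need a rule for fusing per-modality posteriors into one common latent; a standard choice is a product of Gaussian experts, sketched below at a single latent level (MHVAE applies the idea hierarchically; this is illustrative, not the paper's code).

```python
# Product-of-experts fusion of per-modality Gaussian posteriors N(mu_i, var_i):
# precisions add, and the fused mean is the precision-weighted average.
import torch

def product_of_experts(mus, logvars):
    precisions = [torch.exp(-lv) for lv in logvars]   # 1 / var_i
    total_prec = sum(precisions)
    mu = sum(m * p for m, p in zip(mus, precisions)) / total_prec
    logvar = -torch.log(total_prec)                    # var = 1 / sum(prec)
    return mu, logvar

# Toy usage: fuse posteriors from two available modalities (e.g., MR and US).
mu, logvar = product_of_experts(
    [torch.zeros(8), torch.ones(8)],
    [torch.zeros(8), torch.zeros(8)],
)
print(mu)  # halfway between the two expert means when variances are equal
```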
- A Learnable Variational Model for Joint Multimodal MRI Reconstruction and Synthesis [4.056490719080639]
We propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI.
The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality.
arXiv Detail & Related papers (2022-04-08T01:35:19Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction promises to yield SR images of higher quality.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
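The matching idea can be sketched at a single scale: attend from target-contrast features to reference-contrast features and aggregate what matches. The paper does this with multi-scale Transformer machinery; the toy version below only conveys the principle.

```python
# Toy single-scale contextual matching: scaled-dot-product attention from
# target-contrast features to a reference contrast, with residual aggregation.
import torch

def match_and_aggregate(tgt, ref):
    # tgt, ref: [B, N, C] flattened feature maps
    attn = torch.softmax(tgt @ ref.transpose(1, 2) / tgt.shape[-1] ** 0.5, dim=-1)
    return tgt + attn @ ref  # aggregate matched reference features

fused = match_and_aggregate(torch.randn(1, 64, 32), torch.randn(1, 64, 32))
print(fused.shape)  # torch.Size([1, 64, 32])
```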
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
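The ingredient that distinguishes MGP-VAE from a plain VAE is the correlated prior: a GP kernel over scan indices couples the latents of a subject's sub-modalities instead of treating them as independent N(0, I) draws. The sketch below builds such a covariance; the kernel choice and scale are assumptions.

```python
# RBF covariance over scan positions (e.g., four sub-modality indices), the
# kind of correlated latent prior a GP-prior VAE could use (illustrative).
import numpy as np

def gp_prior_cov(positions, lengthscale=1.0, jitter=1e-4):
    d = positions[:, None] - positions[None, :]
    K = np.exp(-0.5 * (d / lengthscale) ** 2)
    return K + jitter * np.eye(len(positions))  # jitter for numerical stability

K = gp_prior_cov(np.arange(4.0))
print(np.round(K, 3))  # nearby scans get strongly correlated latents
```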
- MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution [63.02785017714131]
Video super-resolution (VSR) aims to utilize multiple low-resolution frames to generate a high-resolution prediction for each frame.
Inter- and intra-frame correspondences are the key sources of temporal and spatial information.
We build an effective multi-correspondence aggregation network (MuCAN) for VSR.
arXiv Detail & Related papers (2020-07-23T05:41:27Z)
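Multi-correspondence aggregation can be reduced to a top-k matching step: each feature in the current frame averages its best matches from a neighboring frame. MuCAN learns the features and operates across scales; the sketch below shows the aggregation rule only.

```python
# Toy multi-correspondence aggregation: average each query feature's top-k
# most similar features from a neighboring frame (illustrative, not MuCAN).
import torch

def topk_correspondence_aggregate(q, k_feats, k=4):
    # q: [Nq, C] current-frame features; k_feats: [Nk, C] neighbor-frame features
    sims = q @ k_feats.T                   # [Nq, Nk] similarity matrix
    idx = sims.topk(k, dim=1).indices      # k best matches per query feature
    return k_feats[idx].mean(dim=1)        # aggregate the matched features

agg = topk_correspondence_aggregate(torch.randn(100, 64), torch.randn(100, 64))
print(agg.shape)  # torch.Size([100, 64])
```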
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality.
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
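The Hi-Net structure, per-modality encoders feeding a fusion module that decodes the target modality, is easy to sketch as a skeleton; the layers below are placeholders rather than the paper's network.

```python
# Structural skeleton of the hybrid-fusion idea: one encoder per modality,
# a fusion layer over their features, and a decoder for the target modality.
import torch
import torch.nn as nn

class HybridFusionSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Conv2d(1, dim, 3, padding=1), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Conv2d(1, dim, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * dim, dim, 1)      # combine modality features
        self.dec = nn.Conv2d(dim, 1, 3, padding=1)  # synthesize target modality

    def forward(self, mod_a, mod_b):
        z = torch.cat([self.enc_a(mod_a), self.enc_b(mod_b)], dim=1)
        return self.dec(self.fuse(z))

out = HybridFusionSketch()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```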