InVA: Integrative Variational Autoencoder for Harmonization of
Multi-modal Neuroimaging Data
- URL: http://arxiv.org/abs/2402.02734v1
- Date: Mon, 5 Feb 2024 05:26:17 GMT
- Title: InVA: Integrative Variational Autoencoder for Harmonization of
Multi-modal Neuroimaging Data
- Authors: Bowen Lei, Rajarshi Guhaniyogi, Krishnendu Chandra, Aaron Scheffler,
Bani Mallick (for the Alzheimer's Disease Neuroimaging Initiative)
- Abstract summary: This article proposes a novel approach, referred to as the Integrative Variational Autoencoder (InVA) method, which borrows information from multiple images obtained from different sources to draw predictive inference of an image.
Numerical results demonstrate substantial advantages of InVA over VAEs, which typically do not allow borrowing information between input images.
- Score: 3.792342522967013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a significant interest in exploring non-linear associations among
multiple images derived from diverse imaging modalities. While there is a
growing literature on image-on-image regression to delineate predictive
inference of an image based on multiple images, existing approaches have
limitations in efficiently borrowing information between multiple imaging
modalities in the prediction of an image. Building on the literature of
Variational Auto Encoders (VAEs), this article proposes a novel approach,
referred to as Integrative Variational Autoencoder (\texttt{InVA}) method,
which borrows information from multiple images obtained from different sources
to draw predictive inference of an image. The proposed approach captures
complex non-linear association between the outcome image and input images,
while allowing rapid computation. Numerical results demonstrate substantial
advantages of \texttt{InVA} over VAEs, which typically do not allow borrowing
information between input images. The proposed framework offers highly accurate
predictive inferences for costly positron emission tomography (PET) from
multiple measures of cortical structure in human brain scans readily available
from magnetic resonance imaging (MRI).
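One common mechanism by which multimodal VAEs "borrow information" across encoders is precision-weighted (product-of-experts) fusion of the per-modality Gaussian posteriors. The sketch below illustrates that idea in NumPy; it is an assumed, generic construction for intuition, not the exact InVA model from the paper.

```python
import numpy as np

def poe_gaussian_fusion(mus, vars_):
    """Precision-weighted fusion of per-modality Gaussian posteriors.

    Combining N(mu_i, var_i) experts yields a Gaussian whose precision is
    the sum of the experts' precisions -- one standard way multimodal VAEs
    share information across encoders (illustrative; not InVA's exact model).
    """
    precisions = 1.0 / np.asarray(vars_)
    fused_var = 1.0 / precisions.sum(axis=0)
    fused_mu = fused_var * (precisions * np.asarray(mus)).sum(axis=0)
    return fused_mu, fused_var

# Two hypothetical modality-specific encoders produce latent posteriors:
mu_a, var_a = np.array([0.8]), np.array([0.25])   # e.g. an MRI-derived measure
mu_b, var_b = np.array([1.2]), np.array([1.0])    # e.g. a second measure
mu, var = poe_gaussian_fusion([mu_a, mu_b], [var_a, var_b])
# The fused posterior is sharper than either expert and pulled toward the
# more confident (lower-variance) encoder.
```

The fused variance is always smaller than the smallest input variance, which is the sense in which pooling modalities sharpens the predictive inference.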
Related papers
- Bridging the Gap between Synthetic and Authentic Images for Multimodal
Machine Translation [51.37092275604371]
Multimodal machine translation (MMT) simultaneously takes the source sentence and a relevant image as input for translation.
Recent studies suggest utilizing powerful text-to-image generation models to provide image inputs.
However, synthetic images generated by these models often follow different distributions compared to authentic images.
arXiv Detail & Related papers (2023-10-20T09:06:30Z)
- Deep Unfolding Convolutional Dictionary Model for Multi-Contrast MRI
Super-resolution and Reconstruction [23.779641808300596]
We propose a multi-contrast convolutional dictionary (MC-CDic) model under the guidance of the optimization algorithm.
We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a deep CDic model.
Experimental results demonstrate the superior performance of the proposed MC-CDic model against existing SOTA methods.
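Deep unfolding models of this kind start from a classical proximal gradient iteration and unroll a fixed number of steps into network layers with learnable parameters. The sketch below shows the underlying iteration (ISTA for an l1-regularized least-squares problem) in NumPy; it is a generic illustration of the technique, not the MC-CDic model itself.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, step, n_iter=100):
    """Proximal gradient (ISTA) for min_x 0.5*||y - D x||^2 + lam*||x||_1.

    Unrolling a fixed number of these iterations, with the dictionary and
    thresholds made learnable, is the standard recipe behind deep unfolded
    dictionary models (illustrative sketch, not MC-CDic).
    """
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)                   # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]                       # sparse ground-truth code
y = D @ x_true
step = 1.0 / np.linalg.norm(D, 2) ** 2             # 1/L step size for convergence
x_hat = ista(D, y, lam=0.05, step=step, n_iter=300)
```

In the unrolled ("deep") variant, each iteration becomes a layer whose dictionary and threshold are trained end-to-end instead of fixed in advance.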
arXiv Detail & Related papers (2023-09-03T13:18:59Z)
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Paired Image-to-Image Translation Quality Assessment Using Multi-Method
Fusion [0.0]
This paper proposes a novel approach that combines signals of image quality between paired source and transformation to predict the latter's similarity with a hypothetical ground truth.
We trained a Multi-Method Fusion (MMF) model via an ensemble of gradient-boosted regressors to predict Deep Image Structure and Texture Similarity (DISTS)
Analysis revealed the task to be feature-constrained, introducing a trade-off at inference between metric time and prediction accuracy.
arXiv Detail & Related papers (2022-05-09T11:05:15Z)
- Variational Inference for Quantifying Inter-observer Variability in
Segmentation of Anatomical Structures [12.138198227748353]
Most segmentation methods simply model a mapping from an image to its single segmentation map and do not take the disagreement of annotators into consideration.
We propose a novel variational inference framework to model the distribution of plausible segmentation maps, given a specific MR image.
arXiv Detail & Related papers (2022-01-18T16:33:33Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Flow-based Deformation Guidance for Unpaired Multi-Contrast MRI
Image-to-Image Translation [7.8333615755210175]
In this paper, we introduce a novel approach to unpaired image-to-image translation based on the invertible architecture.
We utilize the temporal information between consecutive slices to provide more constraints to the optimization for transforming one domain to another in unpaired medical images.
arXiv Detail & Related papers (2020-12-03T09:10:22Z)
- CoMIR: Contrastive Multimodal Image Representation for Registration [4.543268895439618]
We propose contrastive coding to learn shared, dense image representations, referred to as CoMIRs (Contrastive Multimodal Image Representations)
CoMIRs enable the registration of multimodal images where existing registration methods often fail due to a lack of sufficiently similar image structures.
arXiv Detail & Related papers (2020-06-11T10:51:33Z)
- Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.