CAVM: Conditional Autoregressive Vision Model for Contrast-Enhanced Brain Tumor MRI Synthesis
- URL: http://arxiv.org/abs/2406.16074v1
- Date: Sun, 23 Jun 2024 10:50:22 GMT
- Title: CAVM: Conditional Autoregressive Vision Model for Contrast-Enhanced Brain Tumor MRI Synthesis
- Authors: Lujun Gui, Chuyang Ye, Tianyi Yan
- Abstract summary: Conditional Autoregressive Vision Model improves synthesis of contrast-enhanced brain tumor MRI.
Deep learning methods have been applied to synthesize virtual contrast-enhanced MRI scans from non-contrast images.
Inspired by the resemblance between the gradual dose increase and the Chain-of-Thought approach in natural language processing, CAVM uses an autoregressive strategy.
- Score: 3.3966430276631208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrast-enhanced magnetic resonance imaging (MRI) is pivotal in the pipeline of brain tumor segmentation and analysis. Gadolinium-based contrast agents, as the most commonly used contrast agents, are expensive and may have potential side effects, and it is desired to obtain contrast-enhanced brain tumor MRI scans without the actual use of contrast agents. Deep learning methods have been applied to synthesize virtual contrast-enhanced MRI scans from non-contrast images. However, as this synthesis problem is inherently ill-posed, these methods fall short in producing high-quality results. In this work, we propose Conditional Autoregressive Vision Model (CAVM) for improving the synthesis of contrast-enhanced brain tumor MRI. As the enhancement of image intensity grows with a higher dose of contrast agents, we assume that it is less challenging to synthesize a virtual image with a lower dose, where the difference between the contrast-enhanced and non-contrast images is smaller. Thus, CAVM gradually increases the contrast agent dosage and produces higher-dose images based on previous lower-dose ones until the final desired dose is achieved. Inspired by the resemblance between the gradual dose increase and the Chain-of-Thought approach in natural language processing, CAVM uses an autoregressive strategy with a decomposition tokenizer and a decoder. Specifically, the tokenizer is applied to obtain a more compact image representation for computational efficiency, and it decomposes the image into dose-variant and dose-invariant tokens. Then, a masked self-attention mechanism is developed for autoregression that gradually increases the dose of the virtual image based on the dose-variant tokens. Finally, the updated dose-variant tokens corresponding to the desired dose are decoded together with dose-invariant tokens to produce the final contrast-enhanced MRI.
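The autoregressive strategy described in the abstract can be illustrated with a minimal sketch: dose-variant tokens are rolled forward step by step, with a causal (lower-triangular) attention mask so each higher-dose token is predicted only from lower-dose ones. This is a toy illustration under stated assumptions, not the paper's implementation; the `step` update and weight matrix `W` are hypothetical stand-ins for the learned decoder.

```python
import numpy as np

def causal_self_attention(x, mask):
    """Single-head self-attention with a causal mask, so the token for
    dose step t attends only to steps <= t (queries/keys share weights
    for brevity)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)  # block attention to future doses
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def autoregressive_dose_synthesis(variant_0, n_steps, step_fn):
    """Start from the zero-dose dose-variant token and roll forward:
    each higher-dose token is predicted from all lower-dose ones."""
    tokens = [variant_0]
    for _ in range(n_steps):
        stack = np.stack(tokens)  # (t, d) history of dose-variant tokens
        mask = np.tril(np.ones((len(tokens), len(tokens)), dtype=bool))
        ctx = causal_self_attention(stack, mask)
        tokens.append(step_fn(ctx[-1]))  # predict next-dose token
    return tokens

# Hypothetical "dose update": a stand-in for the learned transition,
# not the actual CAVM decoder.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1
step = lambda h: h + h @ W  # simple residual update

doses = autoregressive_dose_synthesis(rng.standard_normal(8), n_steps=4, step_fn=step)
print(len(doses))  # 5 tokens: dose 0 plus 4 increments
```

In the paper, the final dose-variant tokens would then be decoded together with the dose-invariant tokens to produce the contrast-enhanced image; that decoding step is omitted here.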
Related papers
- A Time-Intensity Aware Pipeline for Generating Late-Stage Breast DCE-MRI using Generative Adversarial Models [0.3499870393443268]
A novel loss function that leverages the biological behavior of contrast agent (CA) in tissue is proposed to optimize a pixel-attention based generative model.
Unlike traditional normalization and standardization methods, we developed a new normalization strategy that maintains the contrast enhancement pattern across the image sequences at multiple timestamps.
arXiv Detail & Related papers (2024-09-03T04:31:49Z)
- Towards Learning Contrast Kinetics with Multi-Condition Latent Diffusion Models
We propose a latent diffusion model capable of acquisition time-conditioned image synthesis of DCE-MRI temporal sequences.
Our results demonstrate our method's ability to generate realistic multi-sequence fat-saturated breast DCE-MRI.
arXiv Detail & Related papers (2024-03-20T18:01:57Z)
- Gadolinium dose reduction for brain MRI using conditional deep learning [66.99830668082234]
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z)
- A Deep Learning Approach for Virtual Contrast Enhancement in Contrast Enhanced Spectral Mammography [1.1129469448121927]
This work proposes to use deep generative models for virtual contrast enhancement on Contrast Enhanced Spectral Mammography.
Our deep networks, consisting of an autoencoder and two Generative Adversarial Networks, Pix2Pix and CycleGAN, generate synthetic recombined images solely from low-energy images.
arXiv Detail & Related papers (2023-08-01T11:49:05Z) - Simulation of Arbitrary Level Contrast Dose in MRI Using an Iterative
Global Transformer Model [0.7269343652807762]
Deep learning (DL) based contrast dose reduction and elimination in MRI imaging is gaining traction.
These algorithms are however limited by the availability of high quality low dose datasets.
In this work, we formulate a novel transformer (Gformer) based iterative modelling approach for the synthesis of images with arbitrary contrast enhancement.
arXiv Detail & Related papers (2023-07-22T04:44:57Z)
- JoJoNet: Joint-contrast and Joint-sampling-and-reconstruction Network for Multi-contrast MRI [49.29851365978476]
The proposed framework consists of a sampling mask generator for each image contrast and a reconstructor exploiting the inter-contrast correlations with a recurrent structure.
The acceleration ratio of each image contrast is also learnable and can be driven by a downstream task performance.
arXiv Detail & Related papers (2022-10-22T20:46:56Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution [55.52779466954026]
Multi-contrast super-resolution (SR) reconstruction is a promising way to yield higher-quality SR images.
Existing methods lack effective mechanisms to match and fuse these features for better reconstruction.
We propose a novel network to address these problems by developing a set of innovative Transformer-empowered multi-scale contextual matching and aggregation techniques.
arXiv Detail & Related papers (2022-03-26T01:42:59Z)
- CyTran: A Cycle-Consistent Transformer with Multi-Level Consistency for Non-Contrast to Contrast CT Translation [56.622832383316215]
We propose a novel approach to translate unpaired contrast computed tomography (CT) scans to non-contrast CT scans.
Our approach is based on cycle-consistent generative adversarial convolutional transformers, for short, CyTran.
Our empirical results show that CyTran outperforms all competing methods.
arXiv Detail & Related papers (2021-10-12T23:25:03Z)
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.