BrainVoxGen: Deep learning framework for synthesis of Ultrasound to MRI
- URL: http://arxiv.org/abs/2310.08608v2
- Date: Wed, 17 Jul 2024 18:52:47 GMT
- Title: BrainVoxGen: Deep learning framework for synthesis of Ultrasound to MRI
- Authors: Shubham Singh, Mrunal Bewoor, Ammar Ranapurwala, Satyam Rai, Sheetal Patil
- Abstract summary: The work proposes a novel deep-learning framework for the synthesis of three-dimensional MRI volumes from corresponding 3D ultrasound images of the brain.
This research holds promise for transformative applications in medical diagnostics and treatment planning within the neuroimaging domain.
- Score: 2.982610402087728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The work proposes a novel deep-learning framework for the synthesis of three-dimensional MRI volumes from corresponding 3D ultrasound images of the brain, leveraging a modified iteration of the Pix2Pix Generative Adversarial Network (GAN) model. Addressing the formidable challenge of bridging the modality disparity between ultrasound and MRI, this research holds promise for transformative applications in medical diagnostics and treatment planning within the neuroimaging domain. While the findings reveal a discernible degree of similarity between the synthesized MRI volumes and anticipated outcomes, they fall short of practical deployment standards, primarily due to constraints associated with dataset scale and computational resources. The methodology yields MRI volumes with a satisfactory similarity score, establishing a foundational benchmark for subsequent investigations.
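The paper's architecture and hyperparameters are not reproduced on this page, so the following is only a minimal PyTorch sketch of what a 3D Pix2Pix-style generator/discriminator pair for ultrasound-to-MRI volume translation could look like; the layer counts, channel widths, toy volume size, and L1 weighting are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of a 3D Pix2Pix-style translator (ultrasound volume -> MRI volume).
# Shapes, layer counts, and the L1 weighting are illustrative assumptions,
# not the architecture reported in the paper.
import torch
import torch.nn as nn

def down(cin, cout):
    # Strided 3D convolution halves each spatial dimension.
    return nn.Sequential(
        nn.Conv3d(cin, cout, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm3d(cout),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up(cin, cout):
    # Transposed 3D convolution doubles each spatial dimension.
    return nn.Sequential(
        nn.ConvTranspose3d(cin, cout, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm3d(cout),
        nn.ReLU(inplace=True),
    )

class Generator3D(nn.Module):
    """Small encoder-decoder with one skip connection (U-Net style)."""
    def __init__(self, ch=16):
        super().__init__()
        self.e1 = down(1, ch)        # 1 -> ch, spatial /2
        self.e2 = down(ch, ch * 2)   # ch -> 2ch, spatial /4
        self.d1 = up(ch * 2, ch)     # back to /2
        self.d2 = up(ch * 2, ch)     # skip-concat (ch + ch) -> ch, back to full size
        self.out = nn.Conv3d(ch, 1, kernel_size=3, padding=1)

    def forward(self, x):
        h1 = self.e1(x)
        h2 = self.e2(h1)
        u1 = self.d1(h2)
        u2 = self.d2(torch.cat([u1, h1], dim=1))
        return torch.tanh(self.out(u2))

class PatchDiscriminator3D(nn.Module):
    """3D PatchGAN: scores overlapping sub-volumes of the (US, MRI) pair."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            down(2, ch),             # input = ultrasound and MRI concatenated
            down(ch, ch * 2),
            nn.Conv3d(ch * 2, 1, kernel_size=3, padding=1),  # per-patch logits
        )

    def forward(self, us, mri):
        return self.net(torch.cat([us, mri], dim=1))

# One illustrative training step with the usual Pix2Pix objective
# (adversarial loss + lambda * L1 reconstruction loss).
if __name__ == "__main__":
    G, D = Generator3D(), PatchDiscriminator3D()
    bce, l1, lam = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0
    us = torch.randn(1, 1, 32, 32, 32)    # toy ultrasound volume
    mri = torch.randn(1, 1, 32, 32, 32)   # paired ground-truth MRI volume

    fake = G(us)
    d_fake = D(us, fake)
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, mri)

    d_real = D(us, mri)
    d_loss = 0.5 * (bce(d_real, torch.ones_like(d_real))
                    + bce(D(us, fake.detach()), torch.zeros_like(d_fake)))
    print(fake.shape, g_loss.item(), d_loss.item())
```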
Related papers
- Two-Stage Approach for Brain MR Image Synthesis: 2D Image Synthesis and 3D Refinement [1.5683566370372715]
It is crucial to synthesize the missing MR images that reflect the unique characteristics of the absent modality with precise tumor representation.
We propose a two-stage approach that first synthesizes MR images from 2D slices using a novel intensity encoding method and then refines the synthesized MRI.
arXiv Detail & Related papers (2024-10-14T08:21:08Z)
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Unpaired Volumetric Harmonization of Brain MRI with Conditional Latent Diffusion [13.563413478006954]
We propose a novel 3D MRI Harmonization framework through Conditional Latent Diffusion (HCLD)
It comprises a generalizable 3D autoencoder that encodes and decodes MRIs through a 4D latent space.
HCLD learns the latent distribution and generates harmonized MRIs with anatomical information from source MRIs while conditioned on target image style.
arXiv Detail & Related papers (2024-08-18T00:13:48Z)
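The HCLD entry above couples a 3D autoencoder with a latent diffusion model conditioned on target image style. As a rough illustration only, the sketch below shows one noise-prediction step on an autoencoder latent with a style-conditioned denoiser; the autoencoder size, the style-conditioning scheme, and the constant standing in for a noise schedule are assumptions, not the HCLD design.
```python
# Minimal sketch of the conditional-latent-diffusion idea behind HCLD:
# a 3D autoencoder maps an MRI volume to a compact latent, and a denoiser
# conditioned on a target-style code learns to reverse noise added in that
# latent space. Layer sizes and the style conditioning are assumptions.
import torch
import torch.nn as nn

class AutoEncoder3D(nn.Module):
    def __init__(self, ch=8, zc=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(ch, zc, 4, stride=2, padding=1),        # latent: zc x D/4 x H/4 x W/4
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(zc, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1),
        )

class StyleConditionedDenoiser(nn.Module):
    """Predicts the noise added to a latent, given a target-style embedding."""
    def __init__(self, zc=4, style_dim=8):
        super().__init__()
        self.style = nn.Linear(style_dim, zc)                  # style -> per-channel bias
        self.net = nn.Sequential(
            nn.Conv3d(zc, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, zc, 3, padding=1),
        )

    def forward(self, z_noisy, style):
        bias = self.style(style).view(-1, z_noisy.shape[1], 1, 1, 1)
        return self.net(z_noisy + bias)

if __name__ == "__main__":
    ae, eps_model = AutoEncoder3D(), StyleConditionedDenoiser()
    mri = torch.randn(1, 1, 32, 32, 32)          # toy source MRI volume
    style = torch.randn(1, 8)                    # toy target-scanner style code

    z = ae.enc(mri)                              # clean latent
    noise = torch.randn_like(z)
    alpha = 0.7                                  # stand-in for a noise-schedule value
    z_noisy = alpha**0.5 * z + (1 - alpha)**0.5 * noise
    loss = nn.functional.mse_loss(eps_model(z_noisy, style), noise)
    recon = ae.dec(z)                            # decoder maps latents back to image space
    print(z.shape, recon.shape, loss.item())
```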
- A Unified Framework for Synthesizing Multisequence Brain MRI via Hybrid Fusion [4.47838172826189]
We propose a novel unified framework for synthesizing multisequence MR images, called Hybrid Fusion GAN (HF-GAN).
We introduce a hybrid fusion encoder designed to ensure the disentangled extraction of complementary and modality-specific information.
Common feature representations are transformed into a target latent space via the modality infuser to synthesize missing MR sequences.
arXiv Detail & Related papers (2024-06-21T08:06:00Z)
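The HF-GAN entry above describes per-sequence encoders, a fusion of complementary features, and a "modality infuser" that maps the common representation into the target sequence's latent space. The toy sketch below illustrates that data flow only; the fusion-by-averaging, the embedding-based infuser, and the omission of the adversarial branch are simplifying assumptions, not the paper's architecture.
```python
# Minimal sketch of the HF-GAN idea: per-sequence encoders produce features,
# a fusion step builds a common representation from the available sequences,
# and a "modality infuser" maps it into the target sequence's latent space
# before decoding. All module names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class HFGANSketch(nn.Module):
    def __init__(self, n_modalities=4, ch=16):
        super().__init__()
        # One lightweight 2D encoder per MR sequence (e.g. T1, T2, FLAIR, T1ce).
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
            for _ in range(n_modalities)
        ])
        # Modality infuser: conditions the fused features on the target sequence.
        self.target_embed = nn.Embedding(n_modalities, ch)
        self.infuser = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, inputs, target_idx):
        # inputs: dict {modality index -> (B,1,H,W) slice} for the available sequences.
        feats = [self.encoders[i](x) for i, x in inputs.items()]
        common = torch.stack(feats, dim=0).mean(dim=0)          # fuse complementary info
        cond = self.target_embed(target_idx).view(-1, common.shape[1], 1, 1)
        return self.decoder(self.infuser(common + cond))        # synthesize missing sequence

if __name__ == "__main__":
    model = HFGANSketch()
    available = {0: torch.randn(1, 1, 64, 64), 2: torch.randn(1, 1, 64, 64)}
    missing = model(available, torch.tensor([1]))               # ask for sequence index 1
    print(missing.shape)                                        # -> torch.Size([1, 1, 64, 64])
```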
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method for multi-subject fMRI signals, called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596]
Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
arXiv Detail & Related papers (2023-11-22T05:44:51Z)
- Data and Physics Driven Learning Models for Fast MRI -- Fundamentals and Methodologies from CNN, GAN to Attention and Transformers [72.047680167969]
This article aims to introduce the deep learning based data driven techniques for fast MRI including convolutional neural network and generative adversarial network based methods.
We will detail the research in coupling physics and data driven models for MRI acceleration.
Finally, we will demonstrate, through a few clinical applications, the importance of data harmonisation and explainable models for such fast MRI techniques in multicentre and multi-scanner studies.
arXiv Detail & Related papers (2022-04-01T22:48:08Z)
- Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis [8.363448006582065]
We propose a deep learning based multi-modal computing model for MRI synthesis with feature disentanglement strategy.
The proposed approach decomposes each input modality into modality-invariant space with shared information and modality-specific space with specific information.
To address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target.
arXiv Detail & Related papers (2021-05-06T17:22:22Z)
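The feature-disentanglement entry above splits each input modality into a modality-invariant code and a modality-specific code, and uses a local adaptive fusion (LAF) module to build a pseudo-target-specific code at test time. The sketch below captures only that decomposition; replacing LAF with a plain average and the module sizes are assumptions made for illustration.
```python
# Minimal sketch of the shared/specific feature-disentanglement idea:
# each input modality is split into a modality-invariant code and a
# modality-specific code, and the target image is decoded from the shared
# code plus a (pseudo-)target-specific code. Sizes are assumptions; the
# paper's local adaptive fusion (LAF) module is reduced to a plain average.
import torch
import torch.nn as nn

class DisentangledSynth(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc_shared = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.enc_specific = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(ch * 2, 1, 3, padding=1)

    def forward(self, sources):
        # sources: list of (B,1,H,W) slices from the available modalities.
        shared = torch.stack([self.enc_shared(x) for x in sources]).mean(0)
        # Pseudo-target specific code: here simply the averaged specific codes of
        # the inputs, standing in for the paper's LAF module at test time.
        pseudo_specific = torch.stack([self.enc_specific(x) for x in sources]).mean(0)
        return self.decoder(torch.cat([shared, pseudo_specific], dim=1))

if __name__ == "__main__":
    model = DisentangledSynth()
    t1, t2 = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
    print(model([t1, t2]).shape)   # synthesized target-modality slice, (1, 1, 64, 64)
```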
- Neural Architecture Search for Gliomas Segmentation on Multimodal Magnetic Resonance Imaging [2.66512000865131]
We propose a neural architecture search (NAS) based solution to brain tumor segmentation tasks on multimodal MRI scans.
The developed solution also integrates normalization and patching strategies tailored for brain MRI processing.
arXiv Detail & Related papers (2020-05-13T14:32:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.