Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis
- URL: http://arxiv.org/abs/2008.08698v1
- Date: Wed, 19 Aug 2020 22:56:36 GMT
- Title: Self-Supervised Ultrasound to MRI Fetal Brain Image Synthesis
- Authors: Jianbo Jiao, Ana I.L. Namburete, Aris T. Papageorghiou, J. Alison
Noble
- Abstract summary: Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the developing brain but is not suitable for second-trimester anomaly screening.
In this paper we propose to generate MR-like images directly from clinical US images.
The proposed model is end-to-end trainable and self-supervised without any external annotations.
- Score: 20.53251934808636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fetal brain magnetic resonance imaging (MRI) offers exquisite images of the
developing brain but is not suitable for second-trimester anomaly screening,
for which ultrasound (US) is employed. Although expert sonographers are adept
at reading US images, MR images which closely resemble anatomical images are
much easier for non-experts to interpret. Thus in this paper we propose to
generate MR-like images directly from clinical US images. In medical image
analysis such a capability is potentially useful as well, for instance for
automatic US-MRI registration and fusion. The proposed model is end-to-end
trainable and self-supervised without any external annotations. Specifically,
based on an assumption that the US and MRI data share a similar anatomical
latent space, we first utilise a network to extract the shared latent features,
which are then used for MRI synthesis. Since paired data is unavailable for our
study (and rare in practice), pixel-level constraints are infeasible to apply.
We instead propose to enforce the distributions to be statistically
indistinguishable, by adversarial learning in both the image domain and feature
space. To regularise the anatomical structures between US and MRI during
synthesis, we further propose an adversarial structural constraint. A new
cross-modal attention technique is proposed to utilise non-local spatial
information, by encouraging multi-modal knowledge fusion and propagation. We
extend the approach to consider the case where 3D auxiliary information (e.g.,
3D neighbours and a 3D location index) from volumetric data is also available,
and show that this improves image synthesis. The proposed approach is evaluated
quantitatively and qualitatively with comparison to real fetal MR images and
other approaches to synthesis, demonstrating the feasibility of synthesising
realistic MR images.
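The abstract gives no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of the non-local cross-modal attention idea it describes: features from the ultrasound branch query features from the MR-synthesis branch so that spatial information is propagated across modalities. All names, shapes, and design choices are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class CrossModalAttention(nn.Module):
    """Illustrative non-local cross-modal attention block (assumed design).

    Queries are computed from ultrasound (US) features; keys and values are
    computed from MR-synthesis features, so every US position can gather
    context from all MR positions before the fused features are decoded.
    """

    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or max(channels // 8, 1)
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)   # US branch
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)     # MR branch
        self.value = nn.Conv2d(channels, channels, kernel_size=1)  # MR branch
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, us_feat, mr_feat):
        b, c, h, w = us_feat.shape
        q = self.query(us_feat).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.key(mr_feat).flatten(2)                     # (B, C', HW)
        v = self.value(mr_feat).flatten(2).transpose(1, 2)   # (B, HW, C)

        attn = torch.softmax((q @ k) / q.shape[-1] ** 0.5, dim=-1)  # (B, HW, HW)
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)

        # Residual fusion: US features enriched with non-local MR context.
        return us_feat + self.gamma * fused


# Example: fuse two 32-channel feature maps of size 24x24.
fusion = CrossModalAttention(channels=32)
out = fusion(torch.randn(2, 32, 24, 24), torch.randn(2, 32, 24, 24))
```

In a full model, a block of this kind would sit between the shared-latent encoder and the MRI decoder, alongside the image-domain and feature-space discriminators mentioned in the abstract.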
Related papers
- Two-Stage Approach for Brain MR Image Synthesis: 2D Image Synthesis and 3D Refinement [1.5683566370372715]
It is crucial to synthesize the missing MR images that reflect the unique characteristics of the absent modality with precise tumor representation.
We propose a two-stage approach that first synthesizes MR images from 2D slices using a novel intensity encoding method and then refines the synthesized MRI.
arXiv Detail & Related papers (2024-10-14T08:21:08Z)
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach sequences various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
arXiv Detail & Related papers (2024-09-13T10:19:10Z)
- Unpaired Volumetric Harmonization of Brain MRI with Conditional Latent Diffusion [13.563413478006954]
We propose a novel 3D MRI Harmonization framework through Conditional Latent Diffusion (HCLD).
It comprises a generalizable 3D autoencoder that encodes and decodes MRIs through a 4D latent space.
HCLD learns the latent distribution and generates harmonized MRIs with anatomical information from source MRIs while conditioned on target image style.
arXiv Detail & Related papers (2024-08-18T00:13:48Z)
- Synthetic Brain Images: Bridging the Gap in Brain Mapping With Generative Adversarial Model [0.0]
This work investigates the use of Deep Convolutional Generative Adversarial Networks (DCGAN) for producing high-fidelity and realistic MRI image slices.
While the discriminator network learns to distinguish synthesised from real slices, the generator network learns to synthesise realistic MRI image slices.
Through this adversarial training, the generator refines its capacity to generate slices that closely mimic real MRI data; a minimal, generic sketch of such a training loop appears after this list.
arXiv Detail & Related papers (2024-04-11T05:06:51Z)
- Disentangled Latent Energy-Based Style Translation: An Image-Level Structural MRI Harmonization Framework [20.269574292365107]
We develop a novel framework for unpaired image-level MRI harmonization.
It consists of (a) site-invariant image generation (SIG), (b) site-specific style translation (SST), and (c) site-specific MRI synthesis (SMS).
By disentangling image generation and style translation in latent space, DLEST achieves efficient style translation.
arXiv Detail & Related papers (2024-02-10T03:42:37Z)
- Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596]
Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
arXiv Detail & Related papers (2023-11-22T05:44:51Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Video4MRI: An Empirical Study on Brain Magnetic Resonance Image Analytics with CNN-based Video Classification Frameworks [60.42012344842292]
3D CNN-based models dominate the field of magnetic resonance image (MRI) analytics.
In this paper, four datasets for Alzheimer's and Parkinson's disease recognition are used in the experiments.
In terms of efficiency, the video frameworks outperform 3D-CNN models by 5%-11% while using 50%-66% fewer trainable parameters.
arXiv Detail & Related papers (2023-02-24T15:26:31Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
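The DCGAN entry above refers to a standard generator/discriminator training scheme; as a rough illustration (not taken from any of the listed papers), here is a minimal PyTorch sketch of such an adversarial loop for single-channel 64x64 MRI slices. The architectures, sizes, and hyperparameters are generic DCGAN-style assumptions.

```python
import torch
import torch.nn as nn

latent_dim = 100  # assumed size of the generator's noise input

# Generator: noise vector -> (1, 64, 64) synthetic MRI slice.
generator = nn.Sequential(
    nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
    nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),
)

# Discriminator: (1, 64, 64) slice -> probability that the slice is real.
discriminator = nn.Sequential(
    nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(256, 1, 8), nn.Flatten(), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))


def train_step(real_slices):
    """One adversarial update; `real_slices` is a (B, 1, 64, 64) batch."""
    b = real_slices.size(0)
    real, fake = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: distinguish real slices from generated ones.
    noise = torch.randn(b, latent_dim, 1, 1)
    fake_slices = generator(noise)
    loss_d = (criterion(discriminator(real_slices), real)
              + criterion(discriminator(fake_slices.detach()), fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into labelling synthetic slices as real.
    loss_g = criterion(discriminator(fake_slices), real)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```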