Confidence-guided Lesion Mask-based Simultaneous Synthesis of Anatomic
and Molecular MR Images in Patients with Post-treatment Malignant Gliomas
- URL: http://arxiv.org/abs/2008.02859v1
- Date: Thu, 6 Aug 2020 20:20:22 GMT
- Title: Confidence-guided Lesion Mask-based Simultaneous Synthesis of Anatomic
and Molecular MR Images in Patients with Post-treatment Malignant Gliomas
- Authors: Pengfei Guo, Puyang Wang, Rajeev Yasarla, Jinyuan Zhou, Vishal M.
Patel, and Shanshan Jiang
- Abstract summary: Confidence Guided SAMR (CG-SAMR) synthesizes data from lesion information to multi-modal anatomic sequences. A module guides the synthesis based on a confidence measure of the intermediate results. Experiments on real clinical data demonstrate that the proposed model outperforms state-of-the-art synthesis methods.
- Score: 65.64363834322333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven automatic approaches have demonstrated their great potential in
resolving various clinical diagnostic dilemmas in neuro-oncology, especially
with the help of standard anatomic and advanced molecular MR images. However,
data quantity and quality remain a key determinant of, and a significant limit
on, the potential of such applications. In our previous work, we explored
synthesis of anatomic and molecular MR image network (SAMR) in patients with
post-treatment malignant gliomas. Now, we extend it and propose Confidence
Guided SAMR (CG-SAMR) that synthesizes data from lesion information to
multi-modal anatomic sequences, including T1-weighted (T1w), gadolinium
enhanced T1w (Gd-T1w), T2-weighted (T2w), and fluid-attenuated inversion
recovery (FLAIR), and the molecular amide proton transfer-weighted (APTw)
sequence. We introduce a module which guides the synthesis based on a confidence
measure of the intermediate results. Furthermore, we extend the proposed
architecture for unsupervised synthesis so that unpaired data can be used for
training the network. Extensive experiments on real clinical data demonstrate
that the proposed model can perform better than the state-of-the-art synthesis
methods.
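As a rough illustration of the confidence-guided idea (not the paper's exact formulation), a synthesis loss can weight each pixel's reconstruction error by the network's own confidence estimate while penalizing low confidence so the network cannot simply opt out. The function name and the trade-off weight `lam` below are hypothetical:

```python
import math

def confidence_weighted_l1(pred, target, conf, lam=0.1):
    """Toy confidence-guided reconstruction loss (illustrative only).

    Each pixel's L1 error is scaled by the network's confidence estimate,
    while a -log(conf) term penalizes low confidence so the trivial
    solution conf -> 0 is avoided. `lam` is a hypothetical trade-off
    weight, not a value from the paper.
    """
    total = 0.0
    for p, t, c in zip(pred, target, conf):
        total += c * abs(p - t) - lam * math.log(c)
    return total / len(pred)

# A perfect, fully confident prediction incurs zero loss:
print(confidence_weighted_l1([1.0, 0.5], [1.0, 0.5], [1.0, 1.0]))  # 0.0
```

Lowering confidence on an erroneous pixel reduces its contribution, at the cost of the log penalty; in the paper this signal guides the later synthesis stages rather than being a final loss.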
Related papers
- TopoTxR: A topology-guided deep convolutional network for breast parenchyma learning on DCE-MRIs [49.69047720285225]
We propose a novel topological approach that explicitly extracts multi-scale topological structures to better approximate breast parenchymal structures.
We empirically validate TopoTxR using the VICTRE phantom breast dataset.
Our qualitative and quantitative analyses suggest differential topological behavior of breast tissue in treatment-naïve imaging.
arXiv Detail & Related papers (2024-11-05T19:35:10Z)
- Pre- to Post-Contrast Breast MRI Synthesis for Enhanced Tumour Segmentation [0.9722528000969453]
This study explores the feasibility of producing synthetic contrast enhancements by translating pre-contrast T1-weighted fat-saturated breast MRI to their corresponding first DCE-MRI sequence using a generative adversarial network (GAN)
We assess the generated DCE-MRI data using quantitative image quality metrics and apply them to the downstream task of 3D breast tumour segmentation.
Our results highlight the potential of post-contrast DCE-MRI synthesis in enhancing the robustness of breast tumour segmentation models via data augmentation.
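The quantitative image-quality metrics mentioned above can be as simple as PSNR between the real and synthetic images; a minimal pure-Python sketch, assuming intensities are flattened to lists and normalized to [0, max_val]:

```python
import math

def psnr(ref, synth, max_val=1.0):
    """Peak signal-to-noise ratio between a reference image and a
    synthetic one, both given as flat intensity lists in [0, max_val]."""
    mse = sum((r - s) ** 2 for r, s in zip(ref, synth)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr([0.0, 1.0], [0.0, 0.9]), 2))  # 23.01
```

Higher values indicate closer agreement; studies like the one above typically report PSNR alongside SSIM and downstream task performance.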
arXiv Detail & Related papers (2023-11-17T21:48:41Z)
- Cross-Modal Synthesis of Structural MRI and Functional Connectivity Networks via Conditional ViT-GANs [0.8778841570220198]
Cross-modal synthesis between structural magnetic resonance imaging (sMRI) and functional network connectivity (FNC) is relatively unexplored in medical imaging.
This study employs conditional Vision Transformer Generative Adversarial Networks (cViT-GANs) to generate FNC data based on sMRI inputs.
arXiv Detail & Related papers (2023-09-15T05:03:08Z)
- Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction [54.19448988321891]
We propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as auxiliary modalities to expedite T2WIs' acquisitions.
We employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis.
We prove that the reconstructed T2WIs and the synthetic T2WIs become closer on the T2 image manifold as the number of iterations increases.
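In one dimension, the optimal transport map between two intensity distributions with equal sample counts reduces to monotone (rank) matching; the toy sketch below is only a stand-in for the cross-modal alignment the paper performs on full images:

```python
def ot_map_1d(source, target):
    """Map each source intensity to the target intensity of equal rank.

    For 1-D distributions with equal sample counts, this monotone
    rearrangement is the exact optimal transport map.
    """
    order = sorted(range(len(source)), key=lambda i: source[i])
    tgt_sorted = sorted(target)
    out = [0.0] * len(source)
    for rank, idx in enumerate(order):
        out[idx] = tgt_sorted[rank]
    return out

# Ranks are preserved: the smallest source value maps to the smallest target.
print(ot_map_1d([3, 1, 2], [10, 30, 20]))  # [30, 10, 20]
```

Full OT on images requires solving a coupling over spatial and intensity dimensions, but the rank-matching intuition carries over.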
arXiv Detail & Related papers (2023-05-04T12:20:51Z)
- Fast T2w/FLAIR MRI Acquisition by Optimal Sampling of Information Complementary to Pre-acquired T1w MRI [52.656075914042155]
We propose an iterative framework to optimize the under-sampling pattern for MRI acquisition of another modality.
We have demonstrated superior performance of our learned under-sampling patterns on a public dataset.
arXiv Detail & Related papers (2021-11-11T04:04:48Z)
- Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate"
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z)
- A Unified Conditional Disentanglement Framework for Multimodal Brain MR Image Translation [11.26646475512469]
We propose a unified conditional disentanglement framework to synthesize any arbitrary modality from an input modality.
We validate our framework on four MRI modalities, including T1-weighted, T1 contrast enhanced, T2-weighted, and FLAIR MRI, from the BraTS'18 database.
arXiv Detail & Related papers (2021-01-14T03:14:24Z)
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z) - Multi-Modality Generative Adversarial Networks with Tumor Consistency
Loss for Brain MR Image Synthesis [30.64847799586407]
We propose a multi-modality generative adversarial network (MGAN) to synthesize three high-quality MR modalities (FLAIR, T1 and T1ce) from one MR modality T2 simultaneously.
The experimental results show that the quality of the synthesized images is better than the one synthesized by the baseline model, pix2pix.
arXiv Detail & Related papers (2020-05-02T21:33:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.