DS3-Net: Difficulty-perceived Common-to-T1ce Semi-Supervised Multimodal
MRI Synthesis Network
- URL: http://arxiv.org/abs/2203.06920v1
- Date: Mon, 14 Mar 2022 08:22:15 GMT
- Title: DS3-Net: Difficulty-perceived Common-to-T1ce Semi-Supervised Multimodal
MRI Synthesis Network
- Authors: Ziqi Huang, Li Lin, Pujin Cheng, Kai Pan, Xiaoying Tang
- Abstract summary: We propose a Difficulty-perceived common-to-T1ce Semi-Supervised multimodal MRI Synthesis network (DS3-Net)
With only 5% paired data, DS3-Net achieves performance competitive with state-of-the-art image translation methods utilizing 100% paired data, delivering an average SSIM of 0.8947 and an average PSNR of 23.60.
- Score: 3.9562534927482704
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Contrast-enhanced T1 (T1ce) is one of the most essential magnetic resonance
imaging (MRI) modalities for diagnosing and analyzing brain tumors, especially
gliomas. In clinical practice, common MRI modalities such as T1, T2, and
fluid-attenuated inversion recovery (FLAIR) are relatively easy to access,
while T1ce is more challenging to obtain given the additional cost and the
potential risk of allergic reactions to the contrast agent. It is therefore of
great clinical value to develop a method that synthesizes T1ce from the other
common modalities. Existing paired image translation methods typically require
a large amount of paired data and do not focus on specific regions of
interest, e.g., the tumor region, during synthesis. To address these issues,
we propose a
Difficulty-perceived common-to-T1ce Semi-Supervised multimodal MRI Synthesis
network (DS3-Net), involving both paired and unpaired data together with
dual-level knowledge distillation. DS3-Net predicts a difficulty map to
progressively promote the synthesis task. Specifically, a pixelwise constraint
and a patchwise contrastive constraint are guided by the predicted difficulty
map. Through extensive experiments on the publicly available BraTS2020
dataset, DS3-Net outperforms its supervised counterpart in every respect.
Furthermore, with only 5% paired data, the proposed DS3-Net achieves
performance competitive with state-of-the-art image translation methods
utilizing 100% paired data, delivering an average SSIM of 0.8947 and an
average PSNR of 23.60.
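The abstract states that a predicted difficulty map guides both a pixelwise constraint and a patchwise contrastive constraint. Below is a minimal PyTorch sketch of one plausible reading of the pixelwise part, together with the PSNR metric reported above; the function names, tensor shapes, and the weighting scheme are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def difficulty_weighted_l1(pred, target, difficulty_map):
    """Pixelwise constraint modulated by a predicted difficulty map.

    pred, target:   (B, 1, H, W) synthesized and real T1ce slices in [0, 1]
    difficulty_map: (B, 1, H, W) values in [0, 1]; higher = harder pixel
                    (e.g., tumor regions), so hard pixels get larger weight.
    """
    weights = 1.0 + difficulty_map  # assumed weighting scheme
    return (weights * (pred - target).abs()).mean()

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB, the metric reported in the abstract."""
    mse = F.mse_loss(pred, target)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```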
Related papers
- SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z)
- BTDNet: a Multi-Modal Approach for Brain Tumor Radiogenomic Classification [14.547418131610188]
This paper proposes a novel multi-modal approach, BTDNet, to predict MGMT promoter methylation status.
The proposed method outperforms the state-of-the-art methods in the RSNA-ASNR-MICCAI BraTS 2021 Challenge by large margins.
arXiv Detail & Related papers (2023-10-05T11:56:06Z)
- Generalizable synthetic MRI with physics-informed convolutional networks [57.628770497971246]
We develop a physics-informed deep learning-based method to synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a single five-minute acquisition.
We investigate its ability to generalize to arbitrary contrasts to accelerate neuroimaging protocols.
arXiv Detail & Related papers (2023-05-21T21:16:20Z)
- Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis [52.41439725865149]
Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones.
Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model.
We propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis.
arXiv Detail & Related papers (2022-12-02T11:40:40Z)
- Moving from 2D to 3D: volumetric medical image classification for rectal cancer staging [62.346649719614]
Preoperative discrimination between T2 and T3 stages is arguably the most challenging and clinically significant task in rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
arXiv Detail & Related papers (2022-09-13T07:10:14Z)
- Multi-modal Brain Tumor Segmentation via Missing Modality Synthesis and Modality-level Attention Fusion [3.9562534927482704]
We propose an end-to-end framework named the Modality-Level Attention Fusion Network (MAF-Net).
Our proposed MAF-Net is found to yield superior T1ce synthesis performance and accurate brain tumor segmentation.
arXiv Detail & Related papers (2022-03-09T09:08:48Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual-domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge the prior observations into a prior network for our InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Unpaired cross-modality educed distillation (CMEDL) applied to CT lung tumor segmentation [4.409836695738518]
We develop a new cross-modality educed distillation (CMEDL) approach using unpaired CT and MRI scans.
Our framework uses an end-to-end trained unpaired I2I translation, teacher, and student segmentation networks.
arXiv Detail & Related papers (2021-07-16T15:58:15Z)
- Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes from clinical MRI exams with scans of different orientation, resolution and contrast [4.987889348212769]
We present SynthSR, a method to train a CNN that receives one or more thick-slice scans with different contrast, resolution and orientation.
The presented method does not require any preprocessing, e.g., stripping or bias field correction.
arXiv Detail & Related papers (2020-12-24T17:29:53Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
- Deep Learning Estimation of Multi-Tissue Constrained Spherical Deconvolution with Limited Single Shell DW-MRI [2.903217519429591]
Deep learning can be used to estimate the information content captured by 8th-order constrained spherical deconvolution (CSD).
We examine two network architectures: a sequential network of fully connected dense layers with a residual block in the middle (ResDNN), and a patch-based convolutional neural network with a residual block (ResCNN); a sketch of the former follows this list.
The fiber orientation distribution function (fODF) can be recovered with high correlation compared to the MT-CSD ground truth derived from multi-shell DW-MRI acquisitions.
arXiv Detail & Related papers (2020-02-20T15:59:03Z)
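The last entry above describes its two architectures concretely enough to sketch one. Here is a minimal PyTorch sketch of the ResDNN variant (fully connected dense layers with a residual block in the middle); the layer widths, input/output dimensions, and usage example are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ResDNN(nn.Module):
    """Fully connected network with a residual block in the middle,
    loosely following the ResDNN description above (sizes assumed)."""
    def __init__(self, in_dim=45, hidden=400, out_dim=45):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.res = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.tail = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = self.head(x)
        h = h + self.res(h)  # residual block in the middle
        return self.tail(h)

# Usage sketch: map per-voxel single-shell DW-MRI signals to fODF coefficients.
model = ResDNN()
out = model(torch.randn(8, 45))  # batch of 8 voxels
```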
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.