United adversarial learning for liver tumor segmentation and detection of multi-modality non-contrast MRI
- URL: http://arxiv.org/abs/2201.02629v1
- Date: Fri, 7 Jan 2022 18:54:07 GMT
- Title: United adversarial learning for liver tumor segmentation and detection of multi-modality non-contrast MRI
- Authors: Jianfeng Zhao, Dengwang Li, and Shuo Li
- Abstract summary: We propose a united adversarial learning framework (UAL) for simultaneous liver tumor segmentation and detection using multi-modality NCMRI.
UAL first uses a multi-view aware encoder to extract multi-modality NCMRI information for liver tumor segmentation and detection.
The proposed coordinate sharing with padding mechanism integrates the segmentation and detection tasks, enabling both to perform united adversarial learning in one discriminator.
- Score: 5.857654010519764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simultaneous segmentation and detection of liver tumors (hemangioma and hepatocellular carcinoma (HCC)) using multi-modality non-contrast magnetic resonance imaging (NCMRI) is crucial for clinical diagnosis. It remains a challenging task because: (1) HCC information on NCMRI is invisible or insufficient, which makes extracting liver tumor features difficult; (2) the diverse imaging characteristics of the NCMRI modalities make feature fusion and selection difficult; and (3) NCMRI carries no specific information that distinguishes hemangioma from HCC, which makes liver tumor detection difficult. In this study, we propose a united adversarial learning framework (UAL) for simultaneous liver tumor segmentation and detection using multi-modality NCMRI. UAL first uses a multi-view aware encoder to extract multi-modality NCMRI information for liver tumor segmentation and detection; within this encoder, a novel edge dissimilarity feature pyramid module facilitates complementary multi-modality feature extraction. Second, a newly designed fusion and selection channel fuses the multi-modality features and decides which features to keep. Then, the proposed coordinate sharing with padding mechanism integrates the segmentation and detection tasks so that both can perform united adversarial learning in one discriminator. Lastly, an innovative multi-phase radiomics guided discriminator exploits clear and specific tumor information to improve multi-task performance via the adversarial learning strategy. UAL is validated on corresponding multi-modality NCMRI (i.e., T1FS pre-contrast MRI, T2FS MRI, and DWI) and three-phase contrast-enhanced MRI of 255 clinical subjects. The experiments show that UAL has great potential for the clinical diagnosis of liver tumors.
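
The mechanism most specific to UAL, coordinate sharing with padding, is described only at a high level. The PyTorch sketch below is one plausible reading, not the authors' implementation: detection boxes are padded and rasterized onto the segmentation mask's pixel grid so that a single discriminator can score the image, mask, and detection output as one spatially aligned stack. All names, shapes, and channel counts are assumptions.

```python
# Hypothetical sketch of "coordinate sharing with padding"; the rasterization
# scheme and discriminator layout are assumptions, not the paper's design.
import torch
import torch.nn as nn

def boxes_to_coordinate_map(boxes, height, width):
    """Rasterize (x1, y1, x2, y2) boxes into a binary map on the mask's H x W
    grid; zero-padded box slots are skipped."""
    batch, num_boxes, _ = boxes.shape
    coord_map = torch.zeros(batch, 1, height, width, device=boxes.device)
    for b in range(batch):
        for k in range(num_boxes):
            x1, y1, x2, y2 = boxes[b, k].long().tolist()
            if x2 > x1 and y2 > y1:
                coord_map[b, 0, y1:y2, x1:x2] = 1.0
    return coord_map

class UnitedDiscriminator(nn.Module):
    """One discriminator judging segmentation and detection jointly."""
    def __init__(self, in_channels=3):  # image + mask + box map
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1),  # real/fake score for the aligned stack
        )

    def forward(self, image, mask, boxes):
        box_map = boxes_to_coordinate_map(boxes, *mask.shape[-2:])
        return self.net(torch.cat([image, mask, box_map], dim=1))
```

Under this reading, the adversarial loss pushes the predicted (mask, boxes) pair toward the distribution of ground-truth pairs, which is one way a single discriminator could couple the two tasks.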
Related papers
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation between magnetic resonance imaging (MRI) and microscopic imaging of the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using a conditional generative adversarial network (cGAN) architecture (a generic sketch of this setup follows the entry).
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
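
The entry above uses a conditional GAN to translate MRI into microscopic histology. As a point of reference only, a minimal pix2pix-style training step is sketched below; generator and discriminator are assumed to be ordinary image-to-image networks, and nothing here reflects the paper's small-sample-specific design.

```python
# Generic pix2pix-style cGAN step (adversarial + L1 reconstruction loss);
# an illustration of the technique, not the paper's implementation.
import torch
import torch.nn.functional as F

def cgan_step(generator, discriminator, g_opt, d_opt, mri, histology, l1_weight=100.0):
    # Discriminator: real (mri, histology) pairs vs. generated pairs.
    fake = generator(mri)
    d_real = discriminator(torch.cat([mri, histology], dim=1))
    d_fake = discriminator(torch.cat([mri, fake.detach()], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: fool the discriminator while staying close to the target.
    d_fake = discriminator(torch.cat([mri, fake], dim=1))
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(fake, histology))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```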
- Anisotropic Hybrid Networks for liver tumor segmentation with uncertainty quantification [0.5799785223420274]
Liver tumors impose a substantial burden, ranking as the fourth leading cause of cancer mortality.
The delineation of liver and tumor on contrast-enhanced magnetic resonance imaging (CE-MRI) is performed to guide the treatment strategy.
Challenges arise from the lack of available training data, as well as the high variability in terms of image resolution and MRI sequence.
arXiv Detail & Related papers (2023-08-23T07:30:16Z)
- Towards multi-modal anatomical landmark detection for ultrasound-guided brain tumor resection with contrastive learning [3.491999371287298]
Homologous anatomical landmarks between medical scans are instrumental in quantitative assessment of image registration quality.
We propose a novel contrastive learning framework to detect corresponding landmarks between MRI and intra-operative US scans in neurosurgery (a generic contrastive-loss sketch follows this entry).
arXiv Detail & Related papers (2023-07-26T21:55:40Z)
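
The landmark-detection entry above hinges on contrastive learning between MRI and intra-operative ultrasound. A generic InfoNCE-style loss for matched cross-modal patch embeddings is sketched below, under the assumption that row i of each batch embeds the same landmark; this is the general technique, not the paper's exact objective.

```python
# Generic InfoNCE loss for cross-modal landmark matching; an assumption-laden
# sketch, not the paper's framework.
import torch
import torch.nn.functional as F

def landmark_contrastive_loss(mri_emb, us_emb, temperature=0.07):
    """mri_emb, us_emb: (N, D); row i of each embeds the same landmark."""
    mri_emb = F.normalize(mri_emb, dim=1)
    us_emb = F.normalize(us_emb, dim=1)
    logits = mri_emb @ us_emb.t() / temperature  # (N, N) cosine similarities
    labels = torch.arange(mri_emb.size(0), device=mri_emb.device)
    # Matching pairs sit on the diagonal; classify in both directions.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))
```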
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- SpineOne: A One-Stage Detection Framework for Degenerative Discs and Vertebrae [54.751251046196494]
We propose a one-stage detection framework termed SpineOne to simultaneously localize and classify degenerative discs and vertebrae from MRI slices.
SpineOne is built upon the following three key techniques: 1) a new design of the keypoint heatmap to facilitate simultaneous keypoint localization and classification; 2) the use of attention modules to better differentiate the representations between discs and vertebrae; and 3) a novel gradient-guided objective association mechanism to associate multiple learning objectives at the later training stage (a sketch of the first technique follows this entry).
arXiv Detail & Related papers (2021-10-28T12:59:06Z)
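
Of SpineOne's three techniques, the per-class keypoint heatmap is the most self-contained. The sketch below is a hypothetical head in that spirit: one heatmap channel per disc/vertebra class, so the spatial peak localizes a keypoint while its channel index classifies it. Layer sizes are illustrative assumptions.

```python
# Hypothetical per-class keypoint-heatmap head; sizes are illustrative.
import torch
import torch.nn as nn

class KeypointHeatmapHead(nn.Module):
    def __init__(self, in_channels=256, num_classes=10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, num_classes, 1),  # one heatmap channel per class
        )

    def forward(self, features):
        heatmaps = self.head(features)                     # (B, C, H, W)
        b, c, h, w = heatmaps.shape
        scores, idx = heatmaps.view(b, c, -1).max(dim=-1)  # per-class peak
        ys = torch.div(idx, w, rounding_mode="floor")      # peak row
        xs = idx % w                                       # peak column
        return heatmaps, scores, torch.stack([xs, ys], dim=-1)  # (B, C, 2)
```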
- Modality-aware Mutual Learning for Multi-modal Medical Image Segmentation [12.308579499188921]
Liver cancer is one of the most common cancers worldwide.
In this paper, we focus on improving automated liver tumor segmentation by integrating multi-modal CT images.
We propose a novel mutual learning (ML) strategy for effective and robust liver tumor segmentation (a generic mutual-learning sketch follows this entry).
arXiv Detail & Related papers (2021-07-21T02:24:31Z)
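
The mutual learning (ML) strategy above is summarized only at a high level. The sketch below shows generic deep mutual learning between two modality-specific segmentation branches, each supervised by ground truth and pulled toward the other's prediction; the symmetric KL term is an assumption, not necessarily this paper's formulation.

```python
# Generic deep-mutual-learning loss for two segmentation branches;
# the exact coupling term in the paper may differ.
import torch.nn.functional as F

def mutual_learning_loss(logits_a, logits_b, target, kl_weight=1.0):
    """logits_*: (B, C, H, W) per-branch predictions; target: (B, H, W) labels."""
    # Supervised term for each branch.
    sup = F.cross_entropy(logits_a, target) + F.cross_entropy(logits_b, target)
    # Symmetric KL pulls the two branches' predictions together.
    log_pa = F.log_softmax(logits_a, dim=1)
    log_pb = F.log_softmax(logits_b, dim=1)
    kl = (F.kl_div(log_pa, log_pb.exp(), reduction="batchmean")
          + F.kl_div(log_pb, log_pa.exp(), reduction="batchmean"))
    return sup + kl_weight * kl
```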
- Learned super resolution ultrasound for improved breast lesion characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z)
- Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data [2.2515303891664358]
Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods.
We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches (a minimal sketch of this layout follows the entry).
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans.
arXiv Detail & Related papers (2020-08-28T09:15:42Z)
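
The co-segmentation entry above pairs modality-specific encoder and decoder branches with shared multimodal feature learning. Below is a minimal sketch of that layout with illustrative layer sizes and plain concatenation as the shared path; the actual architecture is not specified in the summary.

```python
# Minimal co-segmentation layout with modality-specific branches;
# an illustration only, with assumed layer sizes.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class CoSegNet(nn.Module):
    def __init__(self, classes=2):
        super().__init__()
        self.enc_mri = conv_block(1, 32)           # MRI-specific encoder
        self.enc_pet = conv_block(1, 32)           # PET/CT-specific encoder
        self.dec_mri = nn.Conv2d(64, classes, 1)   # MRI-branch prediction
        self.dec_pet = nn.Conv2d(64, classes, 1)   # PET-branch prediction

    def forward(self, mri, pet):
        f_mri, f_pet = self.enc_mri(mri), self.enc_pet(pet)
        fused = torch.cat([f_mri, f_pet], dim=1)   # shared multimodal features
        return self.dec_mri(fused), self.dec_pet(fused)  # co-segmentation maps
```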
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset (a generic gated-fusion sketch follows this entry).
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
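
The final entry combines feature disentanglement with gated fusion to stay robust when modalities are missing. A generic gated-fusion block, in which each available modality's features are weighted by a learned gate and absent modalities simply drop out of the normalized sum, might look like the following; it illustrates the idea rather than the paper's network.

```python
# Generic gated fusion over a variable set of available modalities;
# an illustration of the idea, not the paper's architecture.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, features):
        """features: list of (B, C, H, W) tensors, one per available modality."""
        gates = torch.stack([self.gate(f) for f in features], dim=0)  # (M, B, 1, H, W)
        weights = torch.softmax(gates, dim=0)   # normalize over modalities
        stacked = torch.stack(features, dim=0)  # (M, B, C, H, W)
        return (weights * stacked).sum(dim=0)   # fused (B, C, H, W)
```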