Edge-aware Multi-task Network for Integrating Quantification
Segmentation and Uncertainty Prediction of Liver Tumor on Multi-modality
Non-contrast MRI
- URL: http://arxiv.org/abs/2307.01798v1
- Date: Tue, 4 Jul 2023 16:08:18 GMT
- Authors: Xiaojiao Xiao, Qinmin Hu, Guanghui Wang
- Abstract summary: This paper proposes a unified framework, namely edge-aware multi-task network (EaMtNet) to associate multi-index quantification, segmentation, and uncertainty of liver tumors.
The proposed model outperforms the state-of-the-art by a large margin, achieving a Dice similarity coefficient of 90.01$\pm$1.23 and a mean absolute error of 2.72$\pm$0.58 mm for MD.
- Score: 21.57865822575582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simultaneous multi-index quantification, segmentation, and uncertainty
estimation of liver tumors on multi-modality non-contrast magnetic resonance
imaging (NCMRI) are crucial for accurate diagnosis. However, existing methods
lack an effective mechanism for multi-modality NCMRI fusion and accurate
boundary information capture, making these tasks challenging. To address these
issues, this paper proposes a unified framework, namely edge-aware multi-task
network (EaMtNet), to associate multi-index quantification, segmentation, and
uncertainty of liver tumors on the multi-modality NCMRI. The EaMtNet employs
two parallel CNN encoders and the Sobel filters to extract local features and
edge maps, respectively. The newly designed edge-aware feature aggregation
module (EaFA) is used for feature fusion and selection, making the network
edge-aware by capturing long-range dependencies between the feature maps and
edge maps.
Multi-tasking leverages prediction discrepancy to estimate uncertainty and
improve segmentation and quantification performance. Extensive experiments are
performed on multi-modality NCMRI with 250 clinical subjects. The proposed
model outperforms the state-of-the-art by a large margin, achieving a Dice
similarity coefficient of 90.01$\pm$1.23 and a mean absolute error of
2.72$\pm$0.58 mm for MD. The results demonstrate the potential of EaMtNet as a
reliable clinical aid for medical image analysis.
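The abstract names two concrete building blocks: edge maps extracted with Sobel filters, and uncertainty estimated from the discrepancy between task predictions. A minimal NumPy sketch of both ideas follows; the function names, toy image, and probability values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sobel_edge_map(img):
    """Edge magnitude of a 2D image via 3x3 Sobel filters."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # valid-mode sliding-window filtering with each kernel
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def discrepancy_uncertainty(pred_a, pred_b):
    """Voxel-wise uncertainty as the absolute disagreement
    between two probabilistic predictions in [0, 1]."""
    return np.abs(pred_a - pred_b)

# toy 2D "slice": a bright square on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edges = sobel_edge_map(img)   # responds along the square's border, zero inside
unc = discrepancy_uncertainty(np.full((4, 4), 0.9),
                              np.full((4, 4), 0.6))
```

In the actual network these operations would run on feature tensors inside the two parallel CNN encoders; the sketch only shows the underlying arithmetic on plain arrays.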
Related papers
- Modality-agnostic Domain Generalizable Medical Image Segmentation by Multi-Frequency in Multi-Scale Attention [1.1155836879100416]
We propose a Modality-agnostic Domain Generalizable Network (MADGNet) for medical image segmentation.
MFMSA block refines the process of spatial feature extraction, particularly in capturing boundary features.
E-SDM mitigates information loss in multi-task learning with deep supervision.
arXiv Detail & Related papers (2024-05-10T07:34:36Z)
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy, with a higher degree of reliability, than other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- United adversarial learning for liver tumor segmentation and detection of multi-modality non-contrast MRI [5.857654010519764]
We propose a united adversarial learning framework (UAL) for simultaneous liver tumors segmentation and detection using multi-modality NCMRI.
The UAL first utilizes a multi-view aware encoder to extract multi-modality NCMRI information for liver tumor segmentation and detection.
The proposed coordinate-sharing-with-padding mechanism integrates the segmentation and detection tasks, enabling united adversarial learning in a single discriminator.
arXiv Detail & Related papers (2022-01-07T18:54:07Z)
- Inconsistency-aware Uncertainty Estimation for Semi-supervised Medical Image Segmentation [92.9634065964963]
We present a new semi-supervised segmentation model, namely, conservative-radical network (CoraNet) based on our uncertainty estimation and separate self-training strategy.
Compared with the current state of the art, our CoraNet has demonstrated superior performance.
arXiv Detail & Related papers (2021-10-17T08:49:33Z)
- Multi-Modal Multi-Instance Learning for Retinal Disease Recognition [10.294738095942812]
We aim to build a deep neural network that recognizes multiple vision-threatening diseases for the given case.
As both data acquisition and manual labeling are extremely expensive in the medical domain, the network has to be relatively lightweight.
arXiv Detail & Related papers (2021-09-25T08:16:47Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior in the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- DFENet: A Novel Dimension Fusion Edge Guided Network for Brain MRI Segmentation [0.0]
We propose a novel Dimension Fusion Edge-guided network (DFENet) that can meet both of these requirements by fusing the features of 2D and 3D CNNs.
The proposed model is robust, accurate, superior to the existing methods, and can be relied upon for biomedical applications.
arXiv Detail & Related papers (2021-05-17T15:43:59Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
- Boundary-aware Context Neural Network for Medical Image Segmentation [15.585851505721433]
Medical image segmentation can provide reliable basis for further clinical analysis and disease diagnosis.
Most existing CNN-based methods produce unsatisfactory segmentation masks without accurate object boundaries.
In this paper, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation.
arXiv Detail & Related papers (2020-05-03T02:35:49Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
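The parameter-reuse idea in the last entry above, sharing all convolutional kernels across CT and MRI, can be illustrated with a toy NumPy example: a single kernel processes inputs from both modalities, so both streams see the same weights. This is a hedged sketch of the concept only; the kernel values and array names are assumptions, not the paper's architecture:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain valid-mode 2D filtering with a single shared kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# one shared 3x3 kernel processes both "modalities"
shared_kernel = np.array([[0.,  1., 0.],
                          [1., -4., 1.],
                          [0.,  1., 0.]])  # Laplacian-like, illustrative only
ct_slice  = np.random.default_rng(0).random((8, 8))
mri_slice = np.random.default_rng(1).random((8, 8))
ct_feat  = conv2d_valid(ct_slice,  shared_kernel)   # same weights ...
mri_feat = conv2d_valid(mri_slice, shared_kernel)   # ... for both inputs
```

In the cited work the sharing happens inside a full segmentation network with modality-specific components elsewhere; the sketch only shows that a single set of weights can serve two input distributions.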
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.