A Radiomics-Incorporated Deep Ensemble Learning Model for
Multi-Parametric MRI-based Glioma Segmentation
- URL: http://arxiv.org/abs/2303.10533v1
- Date: Sun, 19 Mar 2023 02:16:55 GMT
- Title: A Radiomics-Incorporated Deep Ensemble Learning Model for
Multi-Parametric MRI-based Glioma Segmentation
- Authors: Yang Chen, Zhenyu Yang, Jingtong Zhao, Justus Adamson, Yang Sheng,
Fang-Fang Yin, Chunhao Wang
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We developed a deep ensemble learning model with a radiomics spatial encoding
execution for improved glioma segmentation accuracy using multi-parametric MRI
(mp-MRI). This model was developed using 369 glioma patients with a 4-modality
mp-MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. In each
modality volume, a 3D sliding kernel was implemented across the brain to
capture image heterogeneity: fifty-six radiomic features were extracted within
the kernel, resulting in a 4th-order tensor. Each radiomic feature can then be
encoded as a 3D image volume, namely a radiomic feature map (RFM). PCA was
employed for data dimension reduction and the first 4 PCs were selected. Four
deep neural networks following the U-Net architecture were trained as
sub-models for segmenting a region-of-interest (ROI): each sub-model takes
the 4 mp-MRI modalities plus 1 of the 4 PCs as a 5-channel input for a 2D
execution. The 4 softmax probability maps produced by the U-Net ensemble were
superimposed and binarized by the Otsu method to give the segmentation
result. Three ensemble models
were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor
(WT). The adopted radiomics spatial encoding enriches the image
heterogeneity information available to the networks, underpinning the
demonstrated accuracy of the proposed deep ensemble model, which offers a new
tool for mp-MRI based medical image segmentation.
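As a rough illustration of two steps the abstract describes, the sketch below reduces a stack of radiomic feature maps to 4 principal-component volumes with a per-voxel PCA, then fuses simulated sub-model softmax outputs by superimposition and Otsu binarization. The shapes, the random inputs, and the histogram-based Otsu implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rfm_pca(rfm_tensor, n_components=4):
    # Per-voxel PCA over stacked radiomic feature maps (RFMs):
    # rfm_tensor has shape (n_features, D, H, W); each principal
    # component comes back as its own 3D volume.
    n_features = rfm_tensor.shape[0]
    spatial = rfm_tensor.shape[1:]
    X = rfm_tensor.reshape(n_features, -1).T      # voxels x features
    X = X - X.mean(axis=0)                        # center each feature
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:n_components].T                 # PC scores per voxel
    return pcs.T.reshape(n_components, *spatial)

def otsu_threshold(x, nbins=256):
    # Histogram-based Otsu: choose the cut that maximizes between-class
    # variance; return the upper edge of the last class-0 bin.
    hist, edges = np.histogram(x.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                          # voxels at or below bin k
    w1 = np.cumsum(hist[::-1])[::-1]              # voxels at or above bin k
    m0 = np.cumsum(hist * centers) / np.maximum(w0, 1)
    m1 = (np.cumsum((hist * centers)[::-1])
          / np.maximum(w1[::-1], 1))[::-1]
    var_between = w0[:-1] * w1[1:] * (m0[:-1] - m1[1:]) ** 2
    return edges[np.argmax(var_between) + 1]

def fuse_ensemble(prob_maps):
    # Superimpose the sub-models' softmax foreground probabilities
    # and binarize the sum with the Otsu method.
    fused = np.sum(prob_maps, axis=0)
    return (fused > otsu_threshold(fused)).astype(np.uint8)

# Toy run: 10 fake RFM volumes and 4 sub-model probability maps.
rng = np.random.default_rng(0)
pc_maps = rfm_pca(rng.random((10, 8, 8, 8)))      # -> (4, 8, 8, 8)
probs = rng.random((4, 32, 32)) * 0.2             # mostly background
probs[:, 8:16, 8:16] += 0.7                       # consistent "tumor" block
mask = fuse_ensemble(probs)                       # 8 x 8 foreground region
```

Returning the bin's upper edge (rather than its center) keeps every class-0 voxel strictly below the threshold after binarization.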
Related papers
- Brain Tumor Segmentation in MRI Images with 3D U-Net and Contextual Transformer [0.5033155053523042]
This research presents an enhanced approach for precise segmentation of brain tumor masses in magnetic resonance imaging (MRI) using an advanced 3D-UNet model combined with a Context Transformer (CoT).
The proposed model synchronizes tumor mass characteristics from CoT, mutually reinforcing feature extraction, facilitating the precise capture of detailed tumor mass structures.
Several experimental results present the outstanding segmentation performance of the proposed method in comparison to current state-of-the-art approaches.
arXiv Detail & Related papers (2024-07-11T13:04:20Z) - NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z) - SDR-Former: A Siamese Dual-Resolution Transformer for Liver Lesion
Classification Using 3D Multi-Phase Imaging [59.78761085714715]
This study proposes a novel Siamese Dual-Resolution Transformer (SDR-Former) framework for liver lesion classification.
The proposed framework has been validated through comprehensive experiments on two clinical datasets.
To support the scientific community, we are releasing our extensive multi-phase MR dataset for liver lesion analysis to the public.
arXiv Detail & Related papers (2024-02-27T06:32:56Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - ME-Net: Multi-Encoder Net Framework for Brain Tumor Segmentation [6.643336433892116]
We propose a model for brain tumor segmentation with multiple encoders.
The four encoders correspond to the four modalities of the MRI image and perform one-to-one feature extraction; the feature maps of the four modalities are then merged into the decoder.
We also introduce a new loss function named "Categorical Dice", which assigns different weights to different segmented regions to address voxel imbalance.
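The weighted, multi-region Dice idea can be sketched as a per-class soft Dice combined under per-region weights. This is a hedged reconstruction from the abstract; the exact weighting and normalization in ME-Net may differ.

```python
import numpy as np

def categorical_dice_loss(probs, onehot, weights, eps=1e-6):
    # probs, onehot: (n_classes, ...) softmax outputs and one-hot
    # labels; weights: per-region weights meant to counter voxel
    # imbalance. Illustrative form, not the paper's exact definition.
    axes = tuple(range(1, probs.ndim))
    inter = np.sum(probs * onehot, axis=axes)
    denom = np.sum(probs, axis=axes) + np.sum(onehot, axis=axes)
    dice = (2.0 * inter + eps) / (denom + eps)    # soft Dice per class
    weights = np.asarray(weights, dtype=float)
    return 1.0 - np.sum(weights * dice) / np.sum(weights)

# A perfect prediction drives the loss to ~0; up-weighting the two
# small "tumor" classes makes their errors dominate the loss.
onehot = np.zeros((3, 4, 4))
onehot[0, :2] = 1.0        # large background region
onehot[1, 2] = 1.0         # small region
onehot[2, 3] = 1.0         # small region
loss_perfect = categorical_dice_loss(onehot, onehot, [1.0, 2.0, 2.0])
loss_swapped = categorical_dice_loss(onehot[::-1], onehot, [1.0, 2.0, 2.0])
```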
arXiv Detail & Related papers (2022-03-21T14:42:05Z) - A Neural Ordinary Differential Equation Model for Visualizing Deep
Neural Network Behaviors in Multi-Parametric MRI based Glioma Segmentation [3.1435638364138105]
We develop a neural ordinary differential equation (ODE) model for visualizing the behavior of deep neural networks (DNNs) during multi-parametric MRI (mp-MRI) based glioma segmentation.
All neural ODE models successfully illustrated image dynamics as expected.
arXiv Detail & Related papers (2022-03-01T17:16:41Z) - A unified 3D framework for Organs at Risk Localization and Segmentation
for Radiation Therapy Planning [56.52933974838905]
Current medical workflows require manual delineation of organs-at-risk (OAR).
In this work, we aim to introduce a unified 3D pipeline for OAR localization-segmentation.
Our proposed framework fully enables the exploitation of 3D context information inherent in medical imaging.
arXiv Detail & Related papers (2022-03-01T17:08:41Z) - Hierarchical 3D Feature Learning for Pancreas Segmentation [11.588903060674344]
We propose a novel 3D fully convolutional deep network for automated pancreas segmentation from both MRI and CT scans.
Our model outperforms existing methods on CT pancreas segmentation, obtaining an average Dice score of about 88%.
Additional control experiments demonstrate that the achieved performance is due to the combination of our 3D fully-convolutional deep network and the hierarchical representation decoding.
arXiv Detail & Related papers (2021-09-03T09:27:07Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders
for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - Convolutional 3D to 2D Patch Conversion for Pixel-wise Glioma
Segmentation in MRI Scans [22.60715394470069]
We devise a novel pixel-wise segmentation framework through a convolutional 3D to 2D MR patch conversion model.
In our architecture, both local inter-slice and global intra-slice features are jointly exploited to predict class label of the central voxel in a given patch.
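One plausible reading of the 3D-to-2D conversion is stacking neighboring slices of a small 3D patch as 2D input channels, so a 2D network can classify the central voxel. The `depth` and `k` values and the slices-as-channels layout below are assumptions for illustration, not the authors' exact design.

```python
import numpy as np

def patch_3d_to_2d(volume, center, k=7, depth=3):
    # Extract a depth x k x k neighborhood around a central voxel;
    # the depth slices become the channels of a 2D input whose label
    # is the class of the central voxel (illustrative interpretation).
    z, y, x = center
    rz, r = depth // 2, k // 2
    patch = volume[z - rz:z + rz + 1, y - r:y + r + 1, x - r:x + r + 1]
    return patch  # shape (depth, k, k)

vol = np.arange(10 * 16 * 16, dtype=float).reshape(10, 16, 16)
p = patch_3d_to_2d(vol, (5, 8, 8))   # central voxel sits at p[1, 3, 3]
```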
arXiv Detail & Related papers (2020-10-20T20:42:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.