DeepCERES: A Deep learning method for cerebellar lobule segmentation
using ultra-high resolution multimodal MRI
- URL: http://arxiv.org/abs/2401.12074v2
- Date: Tue, 23 Jan 2024 15:23:03 GMT
- Title: DeepCERES: A Deep learning method for cerebellar lobule segmentation
using ultra-high resolution multimodal MRI
- Authors: Sergio Morell-Ortega, Marina Ruiz-Perez, Marien Gadea, Roberto
Vivo-Hernando, Gregorio Rubio, Fernando Aparici, Maria de la Iglesia-Vaya,
Gwenaelle Catheline, Pierrick Coupé, José V. Manjón
- Abstract summary: This paper introduces a novel multimodal and high-resolution human brain cerebellum lobule segmentation method.
The proposed method improves cerebellum lobule segmentation through the use of a multimodal and ultra-high resolution training dataset.
DeepCERES has been developed to make the proposed method available to the scientific community, requiring only a single T1 MR image as input.
- Score: 32.73124984242397
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces a novel multimodal and high-resolution human brain
cerebellum lobule segmentation method. Unlike current tools that operate at
standard resolution ($1 \text{ mm}^{3}$) or use mono-modal data, the proposed
method improves cerebellum lobule segmentation through the use of a multimodal
and ultra-high resolution ($0.125 \text{ mm}^{3}$) training dataset. To develop
the method, first, a database of semi-automatically labelled cerebellum lobules
was created to train the proposed method with ultra-high resolution T1 and T2
MR images. Then, an ensemble of deep networks has been designed and developed,
allowing the proposed method to excel in the complex cerebellum lobule
segmentation task, improving precision while being memory efficient. Notably,
our approach deviates from the traditional U-Net model by exploring alternative
architectures. We have also integrated deep learning with classical machine
learning methods incorporating a priori knowledge from multi-atlas
segmentation, which improved precision and robustness. Finally, a new online
pipeline, named DeepCERES, has been developed to make the proposed method
available to the scientific community, requiring only a single T1 MR image at
standard resolution as input.
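The integration of deep-network predictions with a priori knowledge from multi-atlas segmentation described in the abstract can be sketched as follows. This is a minimal illustrative fusion rule, not the paper's actual formulation: the function name, the multiplicative combination, and the `alpha` weighting are all assumptions.

```python
import numpy as np

def fuse_with_atlas_prior(cnn_probs, atlas_prior, alpha=0.5, eps=1e-8):
    """Fuse per-voxel CNN class probabilities with a multi-atlas label prior.

    cnn_probs, atlas_prior: arrays of shape (X, Y, Z, n_labels).
    alpha: hypothetical weight controlling the prior's influence.
    Returns a hard label map of shape (X, Y, Z).
    """
    # Weight the network's probabilities by the (smoothed) atlas prior.
    fused = cnn_probs * (atlas_prior + eps) ** alpha
    # Renormalize so each voxel's label scores sum to one.
    fused /= fused.sum(axis=-1, keepdims=True)
    # Hard assignment: most likely label per voxel.
    return fused.argmax(axis=-1)
```

In this toy rule, a strong atlas prior can overturn a weak network decision, which is one simple way prior anatomical knowledge can add robustness to a learned segmenter.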
Related papers
- A Multimodal Intermediate Fusion Network with Manifold Learning for Stress Detection [1.2430809884830318]
This paper introduces an intermediate multimodal fusion network with manifold learning-based dimensionality reduction.
We compare various dimensionality reduction techniques for different variations of unimodal and multimodal networks.
We observe that the intermediate-level fusion with the Multi-Dimensional Scaling (MDS) manifold method showed promising results with an accuracy of 96.00%.
arXiv Detail & Related papers (2024-03-12T21:06:19Z)
- DeepThalamus: A novel deep learning method for automatic segmentation of brain thalamic nuclei from multimodal ultra-high resolution MRI [32.73124984242397]
We have designed and implemented a multimodal volumetric deep neural network for the segmentation of thalamic nuclei at ultra-high resolution (0.125 mm³).
A database of semiautomatically segmented thalamic nuclei was created using ultra-high resolution T1, T2 and White Matter nulled (WMn) images.
A novel deep learning-based strategy was designed to obtain the automatic segmentations and trained to improve its robustness and accuracy.
arXiv Detail & Related papers (2024-01-15T14:59:56Z)
- Deep Learning-Based Intra Mode Derivation for Versatile Video Coding [65.96100964146062]
An intelligent intra mode derivation method is proposed in this paper, termed Deep Learning-based Intra Mode Derivation (DLIMD).
The architecture of DLIMD is developed to adapt to different quantization parameter settings and variable coding blocks including non-square ones.
The proposed method can achieve 2.28%, 1.74%, and 2.18% bit rate reduction on average for Y, U, and V components on the platform of Versatile Video Coding (VVC) test model.
arXiv Detail & Related papers (2022-04-08T13:23:59Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- LoRD-Net: Unfolded Deep Detection Network with Low-Resolution Receivers [104.01415343139901]
We propose a deep detector, named LoRD-Net, for recovering information symbols from one-bit measurements.
LoRD-Net has a task-based architecture dedicated to recovering the underlying signal of interest.
We evaluate the proposed receiver architecture for one-bit signal recovery in wireless communications.
arXiv Detail & Related papers (2021-02-05T04:26:05Z)
- MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features compared with the conventional learning methods with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z)
- Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
- A Genetic Algorithm based Kernel-size Selection Approach for a Multi-column Convolutional Neural Network [11.040847116812046]
We introduce a genetic algorithm-based technique to reduce the effort of finding the optimal value of a hyper-parameter (kernel size) of a convolutional neural network-based architecture.
The method is evaluated on three popular datasets of different handwritten Bangla characters and digits.
arXiv Detail & Related papers (2019-12-28T05:37:28Z)
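The genetic-algorithm approach to kernel-size selection summarized above can be sketched as a search over per-layer kernel sizes. This is a minimal generic GA, not the paper's implementation: the selection, crossover, and mutation schemes, the parameter defaults, and the `fitness` callback (which in the paper would be the validation accuracy of the trained multi-column CNN) are all assumptions.

```python
import random

def evolve_kernel_sizes(fitness, n_layers=3, choices=(3, 5, 7),
                        pop_size=8, generations=10, seed=0):
    """Evolve a tuple of per-layer kernel sizes maximizing `fitness`.

    fitness: callable mapping a kernel-size tuple to a score
             (stand-in for validation accuracy of the trained network).
    """
    rng = random.Random(seed)
    # Initial population: random kernel-size combinations.
    pop = [tuple(rng.choice(choices) for _ in range(n_layers))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_layers)     # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.2:               # random mutation
                child[rng.randrange(n_layers)] = rng.choice(choices)
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)
```

Because each fitness evaluation would normally require training a network, the GA's value lies in evaluating far fewer combinations than an exhaustive grid over all kernel-size tuples.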
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.