Learning joint segmentation of tissues and brain lesions from
task-specific hetero-modal domain-shifted datasets
- URL: http://arxiv.org/abs/2009.04009v1
- Date: Tue, 8 Sep 2020 22:00:00 GMT
- Title: Learning joint segmentation of tissues and brain lesions from
task-specific hetero-modal domain-shifted datasets
- Authors: Reuben Dorent, Thomas Booth, Wenqi Li, Carole H. Sudre, Sina
Kafiabadi, Jorge Cardoso, Sebastien Ourselin, Tom Vercauteren
- Abstract summary: We propose a novel approach to build a joint tissue and lesion segmentation model from aggregated task-specific datasets.
We show how the expected risk can be decomposed and optimised empirically.
For each individual task, our joint approach reaches comparable performance to task-specific and fully-supervised models.
- Score: 6.049813979681482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Brain tissue segmentation from multimodal MRI is a key building block of many
neuroimaging analysis pipelines. Established tissue segmentation approaches
have, however, not been developed to cope with large anatomical changes
resulting from pathology, such as white matter lesions or tumours, and often
fail in these cases. In the meantime, with the advent of deep neural networks
(DNNs), segmentation of brain lesions has matured significantly. However, few
existing approaches allow for the joint segmentation of normal tissue and brain
lesions. Developing a DNN for such a joint task is currently hampered by the
fact that annotated datasets typically address only one specific task and rely
on task-specific imaging protocols including a task-specific set of imaging
modalities. In this work, we propose a novel approach to build a joint tissue
and lesion segmentation model from aggregated task-specific hetero-modal
domain-shifted and partially-annotated datasets. Starting from a variational
formulation of the joint problem, we show how the expected risk can be
decomposed and optimised empirically. We exploit an upper bound of the risk to
deal with heterogeneous imaging modalities across datasets. To deal with
potential domain shift, we integrate and test three conventional techniques
based on data augmentation, adversarial learning and pseudo-healthy generation.
For each individual task, our joint approach reaches comparable performance to
task-specific and fully-supervised models. The proposed framework is assessed
on two different types of brain lesions: white matter lesions and gliomas. In
the latter case, lacking a joint ground-truth for quantitative assessment
purposes, we propose and use a novel clinically-relevant qualitative assessment
methodology.
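To make the training recipe above concrete, here is a minimal PyTorch sketch of how such a joint objective could be assembled: the expected risk is split into one empirical term per task-specific dataset, missing modalities are zero-filled behind a fixed channel layout, and classes that a dataset does not annotate are handled by marginalising the softmax over them. Everything in the sketch (the marginalised lesion loss, the zero-filling, the class counts, all names) is an illustrative assumption rather than the paper's implementation; in particular, the upper bound on the risk and the three domain-shift techniques are not shown.

```python
# Minimal sketch (not the authors' code) of joint training over two
# task-specific, hetero-modal, partially-annotated datasets: the
# expected risk is decomposed into one empirical term per dataset, and
# unlabelled classes are handled by marginalising the softmax over the
# classes that a dataset does not annotate.
import torch
import torch.nn.functional as F

N_MODALITIES = 4          # e.g. T1, T1c, T2, FLAIR (assumed layout)
N_TISSUE = 6              # tissue classes incl. background (assumed)
N_CLASSES = N_TISSUE + 1  # joint label space: tissues + lesion

def pad_modalities(x, present):
    """Zero-fill missing channels so one network accepts any subset.

    x: (B, n_present, D, H, W); present: list of channel indices.
    """
    b, _, *spatial = x.shape
    full = x.new_zeros(b, N_MODALITIES, *spatial)
    full[:, present] = x
    return full

def tissue_loss(logits, tissue_labels):
    # Control dataset: tissue maps are complete and lesion-free, so the
    # lesion class is simply never the target (standard cross-entropy).
    return F.cross_entropy(logits, tissue_labels)

def lesion_loss(logits, lesion_mask):
    # Lesion dataset: only the lesion is annotated. For non-lesion
    # voxels the tissue class is unknown, so we marginalise: the model
    # must put its mass on *some* tissue class, summed over all of them.
    log_p = F.log_softmax(logits, dim=1)           # (B, C, D, H, W)
    lesion_lp = log_p[:, -1]                       # lesion channel
    tissue_lp = torch.logsumexp(log_p[:, :-1], 1)  # marginal over tissues
    nll = torch.where(lesion_mask.bool(), -lesion_lp, -tissue_lp)
    return nll.mean()

def joint_risk(model, ctrl_batch, lesion_batch):
    # Empirical counterpart of the decomposed expected risk:
    # one term per task-specific dataset.
    xc, yc, pc = ctrl_batch    # e.g. T1-only controls, pc = [0]
    xl, ml, pl = lesion_batch  # multi-modal patients, pl = [0, 1, 2, 3]
    lc = tissue_loss(model(pad_modalities(xc, pc)), yc)
    ll = lesion_loss(model(pad_modalities(xl, pl)), ml)
    return lc + ll             # equal weighting assumed here
```

In a faithful implementation, the two empirical terms would be weighted according to the dataset proportions appearing in the risk decomposition rather than summed with equal weight.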
Related papers
- KA$^2$ER: Knowledge Adaptive Amalgamation of ExpeRts for Medical Images Segmentation [5.807887214293438]
We propose an adaptive knowledge-amalgamation framework that trains a versatile foundation model to handle the joint goals of multiple expert models.
In particular, we first train an nnUNet-based expert model for each task and reuse the pre-trained SwinUNETR as the target foundation model.
Within the hidden layers, hierarchical attention mechanisms adaptively merge the target model's features with the hidden-layer feature knowledge of all experts.
arXiv Detail & Related papers (2024-10-28T14:49:17Z)
- A Foundation Model for Brain Lesion Segmentation with Mixture of Modality Experts [3.208907282505264]
We propose a universal foundation model for 3D brain lesion segmentation.
We formulate a novel Mixture of Modality Experts (MoME) framework with multiple expert networks attending to different imaging modalities.
Our model outperforms state-of-the-art universal models and provides promising generalization to unseen datasets.
arXiv Detail & Related papers (2024-05-16T16:49:20Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Pain level and pain-related behaviour classification using GRU-based sparsely-connected RNNs [61.080598804629375]
People with chronic pain unconsciously adapt specific body movements to protect themselves from injury or additional pain.
Because there is no dedicated benchmark database for analysing this correlation, we consider one specific circumstance that can influence a person's biometrics during daily activities.
We propose an ensemble of sparsely-connected recurrent neural networks (s-RNNs) with gated recurrent units (GRUs) that incorporates multiple autoencoders.
We conduct several experiments which indicate that the proposed method outperforms the state-of-the-art approaches in classifying both pain level and pain-related behaviour.
arXiv Detail & Related papers (2022-12-20T12:56:28Z)
- Generalizable multi-task, multi-domain deep segmentation of sparse pediatric imaging datasets via multi-scale contrastive regularization and multi-joint anatomical priors [0.41998444721319217]
We propose to design a novel multi-task, multi-domain learning framework in which a single segmentation network is optimized over multiple datasets.
We evaluate our contributions for performing bone segmentation using three scarce and pediatric imaging datasets of the ankle, knee, and shoulder joints.
arXiv Detail & Related papers (2022-07-27T12:59:16Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to compensate for the limited scale of the training data.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE to brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- ACN: Adversarial Co-training Network for Brain Tumor Segmentation with Missing Modalities [26.394130795896704]
We propose a novel Adversarial Co-training Network (ACN) to address the missing-modality problem.
ACN enables a coupled learning process in which the full-modality and missing-modality branches supplement each other's domain.
Our proposed method significantly outperforms all state-of-the-art methods under any missing-modality situation.
arXiv Detail & Related papers (2021-06-28T11:53:11Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes; a minimal sketch of the gated-fusion idea is given after this list.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
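As referenced in the last entry above, the following is a minimal sketch of the gated-fusion idea for missing-modality robustness: each acquired modality is encoded separately, a learned per-voxel gate scores each feature map, and the fused representation is the gate-weighted sum, so absent modalities are simply left out. Module choices, shapes, and the softmax gating are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (assumptions, not the authors' implementation) of
# gated fusion of modality-specific features: each available modality
# is encoded on its own, a learned gate scores each feature map, and
# the fused representation is their gate-weighted sum, so any missing
# modality can simply be dropped from the sum.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, n_modalities=4, channels=32):
        super().__init__()
        # One small encoder per modality (illustrative single conv).
        self.encoders = nn.ModuleList(
            nn.Conv3d(1, channels, 3, padding=1) for _ in range(n_modalities)
        )
        self.gate = nn.Conv3d(channels, 1, 1)  # per-voxel scalar gate

    def forward(self, xs, present):
        # xs: list of (B, 1, D, H, W) tensors, one per acquired modality;
        # present: indices of the modalities actually acquired.
        feats = [self.encoders[i](x) for i, x in zip(present, xs)]
        # Normalise gate scores across the available modalities only.
        gates = torch.softmax(
            torch.stack([self.gate(f) for f in feats]), dim=0
        )
        return sum(g * f for g, f in zip(gates, feats))

# Usage: fuse T1 and FLAIR only (modalities 0 and 3 present).
fusion = GatedFusion()
t1 = torch.randn(2, 1, 8, 16, 16)
flair = torch.randn(2, 1, 8, 16, 16)
fused = fusion([t1, flair], present=[0, 3])  # (2, 32, 8, 16, 16)
```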
This list is automatically generated from the titles and abstracts of the papers on this site.