SynthSeg: Domain Randomisation for Segmentation of Brain MRI Scans of
any Contrast and Resolution
- URL: http://arxiv.org/abs/2107.09559v1
- Date: Tue, 20 Jul 2021 15:22:16 GMT
- Title: SynthSeg: Domain Randomisation for Segmentation of Brain MRI Scans of
any Contrast and Resolution
- Authors: Benjamin Billot, Douglas N. Greve, Oula Puonti, Axel Thielscher, Koen
Van Leemput, Bruce Fischl, Adrian V. Dalca, Juan Eugenio Iglesias
- Abstract summary: Convolutional neural networks (CNNs) have difficulties generalising to unseen target domains.
We introduce SynthSeg, the first segmentation CNN that can segment brain MRI scans of any contrast and resolution.
We demonstrate SynthSeg on 5,500 scans of 6 modalities and 10 resolutions, where it exhibits unparalleled generalisation.
- Score: 7.070890465817133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite advances in data augmentation and transfer learning, convolutional
neural networks (CNNs) have difficulties generalising to unseen target domains.
When applied to segmentation of brain MRI scans, CNNs are highly sensitive to
changes in resolution and contrast: even within the same MR modality, decreases
in performance can be observed across datasets. We introduce SynthSeg, the
first segmentation CNN agnostic to the contrast and resolution of brain MRI
scans. SynthSeg is trained with synthetic data sampled from a generative
model inspired by Bayesian segmentation. Crucially, we adopt a domain
randomisation strategy where we fully randomise the generation parameters to
maximise the variability of the training data. Consequently, SynthSeg can
segment preprocessed and unpreprocessed real scans of any target domain,
without retraining or fine-tuning. Because SynthSeg only requires segmentations
to be trained (no images), it can learn from label maps obtained automatically
from existing datasets of different populations (e.g., with atrophy and
lesions), thus achieving robustness to a wide range of morphological
variability. We demonstrate SynthSeg on 5,500 scans of 6 modalities and 10
resolutions, where it exhibits unparalleled generalisation compared to
supervised CNNs, test-time adaptation, and Bayesian segmentation. The code and
trained model are available at https://github.com/BBillot/SynthSeg.
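
To make the training strategy concrete, below is a minimal sketch of how one training pair could be synthesised from a label map with fully randomised generation parameters, in the spirit of the generative model described in the abstract. This is not the authors' implementation (see the GitHub repository for that); the function name, the SciPy-based warping, and all numeric ranges are illustrative assumptions.

    # Hedged sketch, not the official SynthSeg code: synthesise one (image, label)
    # training pair from an existing label map with fully randomised parameters.
    # All numeric ranges below are illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def synth_training_pair(labels, rng=None):
        """labels: 3D integer array, e.g. an automatic segmentation of any subject."""
        rng = rng or np.random.default_rng()
        shape = labels.shape

        # 1) Random smooth spatial deformation of the label map (nearest-neighbour warp).
        disp = [gaussian_filter(rng.normal(0.0, 4.0, shape), sigma=8.0) for _ in range(3)]
        grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
        coords = np.stack([g + d for g, d in zip(grid, disp)])
        warped = map_coordinates(labels, coords, order=0)

        # 2) Fill each label with intensities drawn from a Gaussian whose mean and std
        #    are themselves sampled from broad uniform priors, so the synthetic contrast
        #    is fully random rather than tied to any MR modality.
        image = np.zeros(shape, dtype=float)
        for lab in np.unique(warped):
            mask = warped == lab
            image[mask] = rng.normal(rng.uniform(0, 255), rng.uniform(1, 25), mask.sum())

        # 3) Random smooth multiplicative bias field.
        image *= np.exp(gaussian_filter(rng.normal(0.0, 0.3, shape), sigma=12.0))

        # 4) Random blurring, standing in for acquisition at a random resolution.
        image = gaussian_filter(image, sigma=rng.uniform(0.5, 3.0))

        # 5) Min-max normalisation; (image, warped) is one synthetic training example.
        image = (image - image.min()) / (image.max() - image.min() + 1e-8)
        return image, warped

Because this pipeline needs only label maps, training examples can be drawn from segmentations of different populations, which is what the abstract means by training without real images; the paper then trains a 3D UNet on a stream of such synthetic pairs.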
Related papers
- Diffusion-based Data Augmentation for Nuclei Image Segmentation [68.28350341833526]
We introduce the first diffusion-based augmentation method for nuclei segmentation.
The idea is to synthesize a large number of labeled images to facilitate training the segmentation model.
Experimental results show that augmenting a 10% labeled real dataset with synthetic samples yields comparable segmentation results.
arXiv Detail & Related papers (2023-10-22T06:16:16Z) - Domain Adaptive Synapse Detection with Weak Point Annotations [63.97144211520869]
We present AdaSyn, a framework for domain adaptive synapse detection with weak point annotations.
In the WASPSYN challenge at ISBI 2023, our method ranked first.
arXiv Detail & Related papers (2023-08-31T05:05:53Z) - Learning from partially labeled data for multi-organ and tumor
segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
arXiv Detail & Related papers (2022-11-13T13:03:09Z) - Robust machine learning segmentation for large-scale analysis of
heterogeneous clinical brain MRI datasets [1.0499611180329802]
We present SynthSeg+, an AI segmentation suite that enables robust analysis of heterogeneous clinical datasets.
Specifically, in addition to whole-brain segmentation, SynthSeg+ also performs cortical parcellation, intracranial volume estimation, and automated detection of faulty segmentations.
We demonstrate SynthSeg+ in seven experiments, including an ageing study on 14,000 scans, where it accurately replicates atrophy patterns observed on data of much higher quality.
arXiv Detail & Related papers (2022-09-05T16:09:24Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Robust Segmentation of Brain MRI in the Wild with Hierarchical CNNs and
no Retraining [1.0499611180329802]
Retrospective analysis of brain MRI scans acquired in the clinic has the potential to enable neuroimaging studies with sample sizes much larger than those found in research datasets.
Recent advances in convolutional neural networks (CNNs) and domain randomisation for image segmentation may enable morphometry of clinical MRI at scale.
We show that SynthSeg is generally robust, but frequently falters on scans with low signal-to-noise ratio or poor tissue contrast.
We propose SynthSeg+, a novel method that greatly mitigates these problems using a hierarchy of conditional segmentation and denoising CNNs.
arXiv Detail & Related papers (2022-03-03T19:18:28Z) - Improving Across-Dataset Brain Tissue Segmentation Using Transformer [10.838458766450989]
This study introduces a novel CNN-Transformer hybrid architecture designed for brain tissue segmentation.
We validate our model's performance across four multi-site T1w MRI datasets.
arXiv Detail & Related papers (2022-01-21T15:16:39Z) - nnFormer: Interleaved Transformer for Volumetric Segmentation [50.10441845967601]
We introduce nnFormer, a powerful segmentation model with an interleaved architecture based on an empirical combination of self-attention and convolution.
nnFormer achieves tremendous improvements over previous transformer-based methods on two commonly used datasets, Synapse and ACDC.
arXiv Detail & Related papers (2021-09-07T17:08:24Z) - Deep Representational Similarity Learning for analyzing neural
signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z) - Spherical coordinates transformation pre-processing in Deep Convolution
Neural Networks for brain tumor segmentation in MRI [0.0]
Deep Convolutional Neural Networks (DCNN) have recently shown very promising results.
DCNN models need large annotated datasets to achieve good performance.
In this work, a 3D spherical coordinate transformation is hypothesized to improve the accuracy of DCNN models.
arXiv Detail & Related papers (2020-08-17T05:11:05Z) - A Learning Strategy for Contrast-agnostic MRI Segmentation [8.264160978159634]
We present a deep learning strategy that enables, for the first time, contrast-agnostic semantic segmentation of unpreprocessed brain MRI scans.
Our proposed learning method, SynthSeg, generates synthetic sample images of widely varying contrasts on the fly during training.
We evaluate our approach on four datasets comprising over 1,000 subjects and four types of MR contrast.
arXiv Detail & Related papers (2020-03-04T11:00:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.