Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge
- URL: http://arxiv.org/abs/2007.02096v2
- Date: Sat, 11 Jul 2020 13:24:15 GMT
- Title: Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge
- Authors: Yue Sun, Kun Gao, Zhengwang Wu, Zhihao Lei, Ying Wei, Jun Ma, Xiaoping
Yang, Xue Feng, Li Zhao, Trung Le Phan, Jitae Shin, Tao Zhong, Yu Zhang,
Lequan Yu, Caizi Li, Ramesh Basnet, M. Omair Ahmad, M.N.S. Swamy, Wenao Ma,
Qi Dou, Toan Duc Bui, Camilo Bermudez Noguera, Bennett Landman (Senior
Member, IEEE), Ian H. Gotlib, Kathryn L. Humphreys, Sarah Shultz, Longchuan
Li, Sijie Niu, Weili Lin, Valerie Jewells, Gang Li (Senior Member, IEEE),
Dinggang Shen (Fellow, IEEE), Li Wang (Senior Member, IEEE)
- Abstract summary: The iSeg-2019 challenge provides a set of 6-month-old infant subjects from multiple sites, acquired with different protocols/scanners, for the participating methods.
At the time of writing, 30 automatic segmentation methods are participating in iSeg-2019.
We review the 8 top-ranked teams by detailing their pipelines/implementations, presenting experimental results, and evaluating performance in terms of the whole brain, regions of interest, and gyral landmark curves.
- Score: 53.48285637256203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To better understand early brain growth patterns in health and disorder, it
is critical to accurately segment infant brain magnetic resonance (MR) images
into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Deep
learning-based methods have achieved state-of-the-art performance; however, one of the major limitations is that learning-based methods may suffer from the multi-site issue, that is, models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols/scanners. To promote methodological development in the community, the iSeg-2019 challenge (http://iseg2019.web.unc.edu) provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners
for the participating methods. Training/validation subjects are from UNC (MAP)
and testing subjects are from UNC/UMN (BCP), Stanford University, and Emory
University. At the time of writing, 30 automatic segmentation methods are participating in iSeg-2019. We review the 8 top-ranked teams by detailing their
pipelines/implementations, presenting experimental results and evaluating
performance in terms of the whole brain, regions of interest, and gyral
landmark curves. We also discuss their limitations and possible future
directions for the multi-site issue. We hope that the multi-site dataset in
iSeg-2019 and this review article will attract more researchers to work on the multi-site issue.
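Segmentation quality in challenges of this kind is typically summarized by the overlap between each predicted tissue map and the manual reference. The snippet below is a minimal sketch of a per-tissue Dice similarity coefficient, assuming a hypothetical label convention (1 = CSF, 2 = GM, 3 = WM); it is an illustration, not the official iSeg-2019 evaluation code.

```python
# Minimal sketch: per-tissue Dice similarity coefficient (DSC).
# The label convention (1=CSF, 2=GM, 3=WM) is an assumption for illustration.
import numpy as np

def dice_per_tissue(pred, truth, labels=(1, 2, 3)):
    """DSC = 2|P ∩ T| / (|P| + |T|), computed separately for each tissue label."""
    scores = {}
    for lab in labels:
        p = (pred == lab)
        t = (truth == lab)
        denom = p.sum() + t.sum()
        # Convention: if both segmentations are empty for a label, score it as perfect.
        scores[lab] = 2.0 * np.logical_and(p, t).sum() / denom if denom > 0 else 1.0
    return scores

# Toy usage with random label maps standing in for predicted and reference volumes.
rng = np.random.default_rng(0)
pred = rng.integers(0, 4, size=(64, 64, 64))
truth = rng.integers(0, 4, size=(64, 64, 64))
print(dice_per_tissue(pred, truth))
```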
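As a concrete illustration of the multi-site issue, different scanners and protocols shift the intensity distribution of each scan, so a common first-line mitigation is per-scan intensity standardization inside the brain mask before training or inference. The sketch below shows this generic baseline under stated assumptions; it is not the preprocessing pipeline used by the iSeg-2019 teams.

```python
# Minimal sketch: per-scan z-score intensity normalization within a brain mask,
# a generic way to reduce inter-site intensity differences (an illustrative
# baseline, not the iSeg-2019 teams' preprocessing).
import numpy as np

def zscore_normalize(volume, brain_mask):
    """Standardize intensities inside the brain mask to zero mean, unit variance."""
    voxels = volume[brain_mask > 0].astype(np.float32)
    mu, sigma = voxels.mean(), voxels.std()
    out = np.zeros_like(volume, dtype=np.float32)
    out[brain_mask > 0] = (voxels - mu) / (sigma + 1e-8)
    return out

# Toy usage: a synthetic T1-like volume and a box-shaped brain mask.
rng = np.random.default_rng(1)
vol = rng.normal(loc=300.0, scale=40.0, size=(64, 64, 64))
mask = np.zeros_like(vol)
mask[8:56, 8:56, 8:56] = 1
normalized = zscore_normalize(vol, mask)
```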
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- On Enhancing Brain Tumor Segmentation Across Diverse Populations with Convolutional Neural Networks [0.9304666952022026]
This work proposes a brain tumor segmentation method as part of the BraTS-GoAT challenge.
The task is to automatically segment tumors in brain MRI scans from various populations, such as adult, pediatric, and underserved sub-Saharan African cohorts.
Our experiments show that our method performs well on the unseen validation set with an average DSC of 85.54% and HD95 of 27.88.
arXiv Detail & Related papers (2024-05-05T08:55:00Z)
- Brain Tumor Segmentation from MRI Images using Deep Learning Techniques [3.1498833540989413]
A public MRI dataset contains 3064 T1-weighted images from 233 patients with three variants of brain tumor: meningioma, glioma, and pituitary tumor.
The dataset files were converted and preprocessed before being fed into the methodology, which involves implementing and training several well-known deep learning models for image segmentation.
The experimental findings show that, among all the applied approaches, the recurrent residual U-Net trained with the Adam optimizer reaches a Mean Intersection over Union of 0.8665 and outperforms the other compared state-of-the-art deep learning models.
arXiv Detail & Related papers (2023-04-29T13:33:21Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- CAS-Net: Conditional Atlas Generation and Brain Segmentation for Fetal MRI [10.127399319119911]
We propose a novel network structure that can simultaneously generate conditional atlases and predict brain tissue segmentation.
The proposed method is trained and evaluated on 253 subjects from the developing Human Connectome Project.
arXiv Detail & Related papers (2022-05-17T11:23:02Z)
- Fetal Brain Tissue Annotation and Segmentation Challenge Results [35.575646854499716]
In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain.
We organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms.
This paper provides a detailed analysis of the results from both a technical and clinical perspective.
arXiv Detail & Related papers (2022-04-20T16:14:43Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Deep Learning Based Brain Tumor Segmentation: A Survey [26.933777009547047]
Brain tumor segmentation is one of the most challenging problems in medical image analysis.
Deep learning methods have shown promising performance in solving various computer vision problems.
More than 100 scientific papers are selected and discussed in this survey.
arXiv Detail & Related papers (2020-07-18T17:14:50Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at the vertebra level, at the scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)