Fetal Brain Tissue Annotation and Segmentation Challenge Results
- URL: http://arxiv.org/abs/2204.09573v1
- Date: Wed, 20 Apr 2022 16:14:43 GMT
- Title: Fetal Brain Tissue Annotation and Segmentation Challenge Results
- Authors: Kelly Payette, Hongwei Li, Priscille de Dumast, Roxane Licandro, Hui
Ji, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Hao Liu,
Yuchen Pei, Lisheng Wang, Ying Peng, Juanying Xie, Huiquan Zhang, Guiming
Dong, Hao Fu, Guotai Wang, ZunHyan Rieu, Donghyeon Kim, Hyun Gi Kim, Davood
Karimi, Ali Gholipour, Helena R. Torres, Bruno Oliveira, João L.
Vilaça, Yang Lin, Netanell Avisdris, Ori Ben-Zvi, Dafna Ben Bashat, Lucas
Fidon, Michael Aertsen, Tom Vercauteren, Daniel Sobotka, Georg Langs, Mireia
Alenyà, Maria Inmaculada Villanueva, Oscar Camara, Bella Specktor Fadida,
Leo Joskowicz, Liao Weibin, Lv Yi, Li Xuesong, Moona Mazher, Abdul Qayyum,
Domenec Puig, Hamza Kebiri, Zelin Zhang, Xinyi Xu, Dan Wu, KuanLun Liao,
YiXuan Wu, JinTai Chen, Yunzhi Xu, Li Zhao, Lana Vasung, Bjoern Menze,
Meritxell Bach Cuadra, Andras Jakab
- Abstract summary: In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain.
We organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms.
This paper provides a detailed analysis of the results from both a technical and clinical perspective.
- Score: 35.575646854499716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In-utero fetal MRI is emerging as an important tool in the diagnosis and
analysis of the developing human brain. Automatic segmentation of the
developing fetal brain is a vital step in the quantitative analysis of prenatal
neurodevelopment both in the research and clinical context. However, manual
segmentation of cerebral structures is time-consuming and prone to error and
inter-observer variability. Therefore, we organized the Fetal Tissue Annotation
(FeTA) Challenge in 2021 in order to encourage the development of automatic
segmentation algorithms on an international level. The challenge utilized the
FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into
seven different tissues (external cerebrospinal fluid, grey matter, white
matter, ventricles, cerebellum, brainstem, deep grey matter). 20 international
teams participated in this challenge, submitting a total of 21 algorithms for
evaluation. In this paper, we provide a detailed analysis of the results from
both a technical and clinical perspective. All participants relied on deep
learning methods, mainly U-Nets, with some variability present in the network
architecture, optimization, and image pre- and post-processing. The majority of
teams used existing medical imaging deep learning frameworks. The main
differences between the submissions were the fine-tuning performed during
training and the specific pre- and post-processing steps applied. The challenge
results showed that almost all submissions performed similarly. Four of the top
five teams used ensemble learning methods. However, one team's algorithm
performed significantly better than the other submissions; it used an
asymmetric U-Net architecture. This paper provides a first-of-its-kind
benchmark for future automatic multi-tissue segmentation algorithms for the
developing human brain in utero.
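As a rough illustration of the ensemble strategy used by four of the top five teams, the sketch below averages per-class probability maps from several models and scores the result with the Dice overlap coefficient, a standard metric in segmentation challenges. The function names, array shapes, and interface are hypothetical; the paper does not prescribe this exact implementation.

```python
import numpy as np


def ensemble_segmentation(prob_maps):
    """Average per-model class-probability maps and take the argmax.

    prob_maps: list of arrays, each shaped (n_classes, D, H, W).
    Returns an integer label volume of shape (D, H, W).
    """
    mean_probs = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return np.argmax(mean_probs, axis=0)


def dice_score(pred, target, label):
    """Dice overlap for one tissue label (1.0 = perfect agreement)."""
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # label absent from both masks: treat as perfect
    return 2.0 * np.logical_and(p, t).sum() / denom


# Toy example: two "models" voting on a single-voxel, two-class volume.
model_a = np.array([0.9, 0.1]).reshape(2, 1, 1, 1)  # favors class 0
model_b = np.array([0.2, 0.8]).reshape(2, 1, 1, 1)  # favors class 1
seg = ensemble_segmentation([model_a, model_b])
print(seg.shape, seg[0, 0, 0])  # averaged probs pick class 0 (0.55 vs 0.45)
```

Probability averaging (soft voting) is only one ensembling choice; majority voting over hard label maps is a common alternative when per-class probabilities are not retained.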
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- BrainSegFounder: Towards 3D Foundation Models for Neuroimage Segmentation [6.5388528484686885]
This study introduces a novel approach towards the creation of medical foundation models.
Our method involves a novel two-stage pretraining approach using vision transformers.
BrainFounder demonstrates a significant performance gain, surpassing the achievements of previous winning solutions.
arXiv Detail & Related papers (2024-06-14T19:49:45Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Automatic Classification of Alzheimer's Disease using brain MRI data and deep Convolutional Neural Networks [0.0]
Alzheimer's disease (AD) is one of the most common public health issues the world is facing today.
This paper explores the construction of several deep learning architectures evaluated on brain MRI images and segmented images.
arXiv Detail & Related papers (2022-03-31T20:15:51Z)
- An automatic multi-tissue human fetal brain segmentation benchmark using the Fetal Tissue Annotation Dataset [10.486148937249837]
We introduce a publicly available database of 50 manually segmented pathological and non-pathological fetal magnetic resonance brain volume reconstructions.
We quantitatively evaluate the accuracy of several automatic multi-tissue segmentation algorithms of the developing human fetal brain.
arXiv Detail & Related papers (2020-10-29T12:46:05Z)
- Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge [53.48285637256203]
The iSeg-2019 challenge provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners for the participating methods.
At the time of writing, 30 automatic segmentation methods are participating in iSeg-2019.
We review the 8 top-ranked teams by detailing their pipelines/implementations, presenting experimental results and evaluating performance in terms of the whole brain, regions of interest, and gyral landmark curves.
arXiv Detail & Related papers (2020-07-04T13:39:48Z)
- Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z)
- VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images [121.31355003451152]
The Large Scale Vertebrae Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020.
We present the results of this evaluation and further investigate the performance variation at vertebra level, at scan level, and at different fields of view.
arXiv Detail & Related papers (2020-01-24T21:09:18Z)
- Transfer Learning for Brain Tumor Segmentation [0.6408773096179187]
Gliomas are the most common malignant brain tumors that are treated with chemoradiotherapy and surgery.
Recent advances in deep learning have led to convolutional neural network architectures that excel at various visual recognition tasks.
In this work, we construct FCNs with pretrained convolutional encoders. We show that we can stabilize the training process this way and achieve an improvement with respect to dice scores and Hausdorff distances.
arXiv Detail & Related papers (2019-12-28T12:45:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.