Search Wide, Focus Deep: Automated Fetal Brain Extraction with Sparse Training Data
- URL: http://arxiv.org/abs/2410.20532v2
- Date: Tue, 29 Oct 2024 17:36:33 GMT
- Title: Search Wide, Focus Deep: Automated Fetal Brain Extraction with Sparse Training Data
- Authors: Javid Dadashkarimi, Valeria Pena Trujillo, Camilo Jaimes, Lilla Zöllei, Malte Hoffmann
- Abstract summary: We propose a test-time strategy that reduces false positives in networks trained on sparse, synthetic labels.
We train models at different window sizes using synthetic images derived from a small number of fetal brain label maps.
Our framework matches state-of-the-art brain extraction methods on clinical HASTE scans of third-trimester fetuses.
- Abstract: Automated fetal brain extraction from full-uterus MRI is a challenging task due to variable head sizes, orientations, complex anatomy, and prevalent artifacts. While deep-learning (DL) models trained on synthetic images have been successful in adult brain extraction, adapting these networks for fetal MRI is difficult due to the sparsity of labeled data, leading to increased false-positive predictions. To address this challenge, we propose a test-time strategy that reduces false positives in networks trained on sparse, synthetic labels. The approach uses a breadth-fine search (BFS) to identify a subvolume likely to contain the fetal brain, followed by a deep-focused sliding-window (DFS) search to refine the extraction, pooling predictions to minimize false positives. We train models at different window sizes using synthetic images derived from a small number of fetal brain label maps, augmented with random geometric shapes. Each model is trained on diverse head positions and scales, including cases with partial or no brain tissue. Our framework matches state-of-the-art brain extraction methods on clinical HASTE scans of third-trimester fetuses and exceeds them by up to 5% Dice in the second trimester, as well as on EPI scans across both trimesters. Our results demonstrate the utility of a sliding-window approach and of combining predictions from several models trained on synthetic images: progressively refining regions of interest improves brain-extraction accuracy and minimizes the risk of missing brain-mask slices or misidentifying other tissues as brain.
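The two-stage test-time strategy described in the abstract can be sketched in a few lines of NumPy. This is a simplified illustration of the idea, not the authors' implementation: `model_coarse` and `model_fine` are placeholders for trained networks that map a patch to a voxel-wise brain-probability map, and the fixed cubic windows and strides are assumptions for clarity.

```python
import numpy as np

def breadth_fine_search(volume, model_coarse, window, stride):
    """BFS stage: scan the full volume with a coarse window and return
    the origin of the subvolume most likely to contain the fetal brain."""
    best_score, best_origin = -np.inf, (0, 0, 0)
    for z in range(0, volume.shape[0] - window + 1, stride):
        for y in range(0, volume.shape[1] - window + 1, stride):
            for x in range(0, volume.shape[2] - window + 1, stride):
                patch = volume[z:z+window, y:y+window, x:x+window]
                score = model_coarse(patch).mean()  # mean brain probability
                if score > best_score:
                    best_score, best_origin = score, (z, y, x)
    return best_origin

def deep_focused_search(volume, origin, model_fine, window, fine_win, stride):
    """DFS stage: slide a finer window inside the BFS subvolume and average
    the overlapping probability maps, pooling predictions to suppress
    false positives."""
    z0, y0, x0 = origin
    sub = volume[z0:z0+window, y0:y0+window, x0:x0+window]
    prob = np.zeros_like(sub, dtype=float)
    count = np.zeros_like(sub, dtype=float)
    for z in range(0, window - fine_win + 1, stride):
        for y in range(0, window - fine_win + 1, stride):
            for x in range(0, window - fine_win + 1, stride):
                patch = sub[z:z+fine_win, y:y+fine_win, x:x+fine_win]
                prob[z:z+fine_win, y:y+fine_win, x:x+fine_win] += model_fine(patch)
                count[z:z+fine_win, y:y+fine_win, x:x+fine_win] += 1
    return prob / np.maximum(count, 1)  # pooled probability map
```

Thresholding the pooled map then yields the brain mask; in the paper, predictions from several models trained at different window sizes are additionally combined at this stage.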
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- Self-supervised Brain Lesion Generation for Effective Data Augmentation of Medical Images [0.9626666671366836]
We propose a framework to efficiently generate new samples for training a brain lesion segmentation model.
We first train a lesion generator, based on an adversarial autoencoder, in a self-supervised manner.
Next, we utilize a novel image composition algorithm, Soft Poisson Blending, to seamlessly combine synthetic lesions and brain images.
arXiv Detail & Related papers (2024-06-21T01:53:12Z)
- Fetal-BET: Brain Extraction Tool for Fetal MRI [4.214523989654048]
We build a large annotated dataset of approximately 72,000 2D fetal brain MRI images.
Using this dataset, we developed and validated deep learning methods, by exploiting the power of the U-Net style architectures.
Our approach leverages the rich information from multi-contrast (multi-sequence) fetal MRI data, enabling precise delineation of the fetal brain structures.
arXiv Detail & Related papers (2023-10-02T18:14:23Z)
- Tissue Segmentation of Thick-Slice Fetal Brain MR Scans with Guidance from High-Quality Isotropic Volumes [52.242103848335354]
We propose a novel Cycle-Consistent Domain Adaptation Network (C2DA-Net) to efficiently transfer the knowledge learned from high-quality isotropic volumes for accurate tissue segmentation of thick-slice scans.
Our C2DA-Net can fully utilize a small set of annotated isotropic volumes to guide tissue segmentation on unannotated thick-slice scans.
arXiv Detail & Related papers (2023-08-13T12:51:15Z)
- Patched Diffusion Models for Unsupervised Anomaly Detection in Brain MRI [55.78588835407174]
We propose a method that reformulates the generation task of diffusion models as a patch-based estimation of healthy brain anatomy.
We evaluate our approach on data of tumors and multiple sclerosis lesions and demonstrate a relative improvement of 25.1% compared to existing baselines.
arXiv Detail & Related papers (2023-03-07T09:40:22Z)
- CAS-Net: Conditional Atlas Generation and Brain Segmentation for Fetal MRI [10.127399319119911]
We propose a novel network structure that can simultaneously generate conditional atlases and predict brain tissue segmentation.
The proposed method is trained and evaluated on 253 subjects from the developing Human Connectome Project.
arXiv Detail & Related papers (2022-05-17T11:23:02Z)
- Learning to segment fetal brain tissue from noisy annotations [6.456673654519456]
Automatic fetal brain tissue segmentation can enhance the quantitative assessment of brain development at this critical stage.
Deep learning methods represent the state of the art in medical image segmentation and have also achieved impressive results in brain segmentation.
However, effective training of a deep learning model to perform this task requires a large number of training images to represent the rapid development of the transient fetal brain structures.
arXiv Detail & Related papers (2022-03-25T21:22:24Z)
- SynthStrip: Skull-Stripping for Any Brain Image [7.846209440615028]
We introduce SynthStrip, a rapid, learning-based brain-extraction tool.
By leveraging anatomical segmentations, SynthStrip generates a synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images.
We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model.
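The label-driven synthesis that SynthStrip (and the main paper above) rely on can be illustrated with a minimal NumPy sketch: each anatomical label receives a random mean intensity and the image is corrupted with noise. This is a deliberately simplified stand-in; the real generators also randomize smoothing, bias fields, resolution, and artifacts.

```python
import numpy as np

def synthesize_image(label_map, seed=None):
    """Generate one synthetic training image from an anatomical label map.
    Simplified illustration of label-driven synthesis, not the actual
    SynthStrip generator."""
    rng = np.random.default_rng(seed)
    image = np.zeros(label_map.shape, dtype=float)
    for label in np.unique(label_map):
        # each anatomical label gets its own random mean intensity
        image[label_map == label] = rng.uniform(0.0, 1.0)
    image += rng.normal(0.0, 0.05, size=image.shape)  # additive noise
    return np.clip(image, 0.0, 1.0)
```

Because the intensities are resampled on every draw, a small set of label maps yields an effectively unlimited stream of appearance-varied training images, which is what lets these models train without large labeled datasets.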
arXiv Detail & Related papers (2022-03-18T14:08:20Z)
- Cross-Modality Neuroimage Synthesis: A Survey [71.27193056354741]
Multi-modality imaging improves disease diagnosis and reveals distinct deviations in tissues with anatomical properties.
Completely aligned and paired multi-modality neuroimaging data have proved effective in brain research, but such data are expensive or impractical to collect.
An alternative solution is to explore unsupervised or weakly supervised learning methods to synthesize the absent neuroimaging data.
arXiv Detail & Related papers (2022-02-14T19:29:08Z)
- Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans.
We provide the voxel-wise 3D CNN interpretation comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)
- Hybrid Attention for Automatic Segmentation of Whole Fetal Head in Prenatal Ultrasound Volumes [52.53375964591765]
We propose the first fully-automated solution to segment the whole fetal head in US volumes.
The segmentation task is firstly formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture.
We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress the non-informative volumetric features.
arXiv Detail & Related papers (2020-04-28T14:43:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.