Segmenting Fetal Head with Efficient Fine-tuning Strategies in Low-resource Settings: an empirical study with U-Net
- URL: http://arxiv.org/abs/2407.20086v1
- Date: Mon, 29 Jul 2024 15:16:08 GMT
- Title: Segmenting Fetal Head with Efficient Fine-tuning Strategies in Low-resource Settings: an empirical study with U-Net
- Authors: Fangyijie Wang, Guénolé Silvestre, Kathleen M. Curran
- Abstract summary: Accurate measurement of fetal head circumference is crucial for estimating fetal growth during routine prenatal screening.
Recent advancements in deep learning techniques have shown significant progress in segmenting the fetal head using encoder-decoder models.
There are still no "best-practice" guidelines for optimal fine-tuning of U-Net for fetal ultrasound image segmentation.
This work evaluates fine-tuning strategies across various backbone architectures and model components, using ultrasound data from the Netherlands, Spain, Malawi, Egypt, and Algeria.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Accurate measurement of fetal head circumference is crucial for estimating fetal growth during routine prenatal screening. Prior to measurement, it is necessary to accurately identify and segment the region of interest, specifically the fetal head, in ultrasound images. Recent advancements in deep learning techniques have shown significant progress in segmenting the fetal head using encoder-decoder models. Among these models, U-Net has become a standard approach for accurate segmentation. However, training an encoder-decoder model can be a time-consuming process that demands substantial computational resources. Moreover, fine-tuning these models is particularly challenging when there is a limited amount of data available. There are still no "best-practice" guidelines for optimal fine-tuning of U-Net for fetal ultrasound image segmentation. This work evaluates fine-tuning strategies across various backbone architectures and model components, using ultrasound data from the Netherlands, Spain, Malawi, Egypt, and Algeria. Our study shows that (1) fine-tuning U-Net leads to better performance than training from scratch, (2) fine-tuning strategies targeting the decoder are superior to other strategies, and (3) a network architecture with fewer parameters can achieve similar or better performance. We also demonstrate the effectiveness of fine-tuning strategies in low-resource settings and further expand our experiments into few-shot learning. Lastly, we publicly release our code and the fine-tuned weights.
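The decoder-only fine-tuning strategy the study favours can be sketched in PyTorch: freeze every encoder parameter of a pretrained U-Net and hand only the still-trainable (decoder) parameters to the optimizer. The `TinyUNet` below is a hypothetical stand-in for the actual backbones used in the paper, and the learning rate is illustrative.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style encoder-decoder (illustrative stand-in only)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.Conv2d(8, 1, 1),  # 1-channel segmentation-mask logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyUNet()

# "Fine-tune decoder" strategy: freeze every encoder weight ...
for p in model.encoder.parameters():
    p.requires_grad = False

# ... and optimize only what is still trainable (the decoder).
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)

n_total = sum(p.numel() for p in model.parameters())
n_train = sum(p.numel() for p in trainable)
print(f"trainable {n_train}/{n_total} parameters")
```

The same freezing pattern extends to the paper's other strategies (e.g. freezing the decoder instead, or unfreezing only the last encoder block) by changing which submodule's `requires_grad` flags are cleared.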
Related papers
- Evaluate Fine-tuning Strategies for Fetal Head Ultrasound Image Segmentation with U-Net [0.0]
We propose a Transfer Learning (TL) method as an alternative to training a CNN from scratch.
Our approach involves fine-tuning (FT) a U-Net network with a lightweight MobileNet as the encoder.
Our proposed FT strategy outperforms other strategies while keeping fewer than 4.4 million trainable parameters.
arXiv Detail & Related papers (2023-07-18T08:37:58Z) - Frequency Disentangled Learning for Segmentation of Midbrain Structures from Quantitative Susceptibility Mapping Data [1.9150304734969674]
Deep models tend to fit the target function from low to high frequencies.
One often lacks sufficient samples for training deep segmentation models.
We propose a new training method based on frequency-domain disentanglement.
arXiv Detail & Related papers (2023-02-25T04:30:11Z) - Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - EVC-Net: Multi-scale V-Net with Conditional Random Fields for Brain Extraction [3.4376560669160394]
EVC-Net adds lower scale inputs on each encoder block.
Conditional Random Fields are re-introduced here as an additional step for refining the network's output.
Results show that even with limited training resources, EVC-Net achieves higher Dice Coefficient and Jaccard Index.
arXiv Detail & Related papers (2022-06-06T18:21:21Z) - Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z) - Boosting Segmentation Performance across datasets using histogram specification with application to pelvic bone segmentation [1.3750624267664155]
We propose a methodology based on modulation of image tonal distributions and deep learning to boost the performance of networks trained on limited data.
The segmentation task uses a U-Net configuration with an EfficientNet-B0 backbone, optimized using an augmented BCE-IoU loss function.
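Histogram specification of the kind described above can be sketched with plain NumPy: map each source-image intensity quantile onto the intensity a reference image has at the same quantile, so both datasets share a similar tonal distribution before training. This is a generic sketch of histogram matching, not the paper's exact pipeline, and the two synthetic "scanner" images are hypothetical.

```python
import numpy as np

def match_histograms(source, reference):
    """Remap source intensities so their distribution matches reference's."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)

    # Empirical CDFs of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size

    # For each source quantile, look up the reference intensity there.
    matched_vals = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched_vals[src_idx].reshape(source.shape)

rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.4, size=(64, 64))    # low-intensity "scanner A"
bright = rng.uniform(0.5, 1.0, size=(64, 64))  # high-intensity "scanner B"
matched = match_histograms(dark, bright)
print(f"mean before {dark.mean():.2f}, after {matched.mean():.2f}")
```

After matching, the dark image's intensity statistics track the bright reference, which is the property such methods exploit to make a network trained on one dataset transfer to another.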
arXiv Detail & Related papers (2021-01-26T23:48:40Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
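The weight binarization such models rely on can be illustrated, independently of any graph structure, with the common sign-plus-scale scheme: replace a real-valued weight matrix by its sign, rescaled by the mean absolute value, so matrix products reduce to additions and subtractions. A hypothetical NumPy sketch, not the paper's exact formulation:

```python
import numpy as np

def binarize(W):
    """XNOR-Net-style binarization: B = sign(W), alpha = mean(|W|)."""
    alpha = np.abs(W).mean()
    B = np.where(W >= 0, 1.0, -1.0)
    return B, alpha

rng = np.random.default_rng(1)
W = rng.normal(size=(16, 16))
x = rng.normal(size=16)

B, alpha = binarize(W)
full = W @ x               # full-precision product
approx = alpha * (B @ x)   # binary approximation
err = np.linalg.norm(full - approx) / np.linalg.norm(full)
print(f"relative error: {err:.2f}")
```

The approximation error this introduces at each layer is what the careful model design and training control mentioned above are needed to keep in check.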
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - Contrastive learning of global and local features for medical image segmentation with limited annotations [10.238403787504756]
A key requirement for the success of supervised deep learning is a large labeled dataset.
We propose strategies for extending the contrastive learning framework for segmentation of medical images in the semi-supervised setting.
In the limited annotation setting, the proposed method yields substantial improvements compared to other self-supervision and semi-supervised learning techniques.
arXiv Detail & Related papers (2020-06-18T13:31:26Z) - Learning Fast and Robust Target Models for Video Object Segmentation [83.3382606349118]
Video object segmentation (VOS) is a highly challenging problem since the initial mask, defining the target object, is only given at test-time.
Most previous approaches fine-tune segmentation networks on the first frame, resulting in impractical frame-rates and risk of overfitting.
We propose a novel VOS architecture consisting of two network components.
arXiv Detail & Related papers (2020-02-27T21:58:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.