A study of CNN capacity applied to Left Ventricle Segmentation in Cardiac
MRI
- URL: http://arxiv.org/abs/2107.01318v1
- Date: Sat, 3 Jul 2021 00:56:21 GMT
- Title: A study of CNN capacity applied to Left Ventricle Segmentation in Cardiac
MRI
- Authors: Marcelo Toledo, Daniel Lima, José Krieger, Marco Gutierrez
- Abstract summary: CNN models have been successfully used for segmentation of the left ventricle (LV) in cardiac MRI (Magnetic Resonance Imaging)
Two questions arise with deployment of CNNs: 1) when is it better to use a shallow model instead of a deeper one? 2) how does the size of a dataset change network performance?
We propose a framework to answer them, by experimenting with deep and shallow versions of three U-Net families, trained from scratch on six subsets varying from 100 to 10,000 images, with different network sizes, learning rates and regularization values.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: CNN (Convolutional Neural Network) models have been successfully used for
segmentation of the left ventricle (LV) in cardiac MRI (Magnetic Resonance
Imaging), providing clinical measurements. In practice, two questions arise with
deployment of CNNs: 1) when is it better to use a shallow model instead of a
deeper one? 2) how does the size of a dataset change network performance?
We propose a framework to answer them, by experimenting with deep and shallow
versions of three U-Net families, trained from scratch on six subsets varying
from 100 to 10,000 images, with different network sizes, learning rates and
regularization values. 1620 models were evaluated using 5-fold cross-validation
by loss and Dice score. The results indicate that: sample size affects performance
more than architecture or hyper-parameters; in small samples the performance is
more sensitive to hyper-parameters than architecture; the performance
difference between shallow and deeper networks is not the same across families.
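The abstract evaluates 1620 models by loss and Dice. As a point of reference, a minimal sketch of the Dice score for binary segmentation masks (the toy masks and function name are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2*|A ∩ B| / (|A| + |B|); eps guards against two empty masks
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy 4x4 "left-ventricle" masks: prediction overlaps ground truth in 3 pixels
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(pred, target), 3))  # 2*3 / (4+3) = 6/7 ≈ 0.857
```

A Dice of 1.0 means perfect overlap and 0.0 means none, which is why it is preferred over pixel accuracy for small foreground structures like the LV.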
Related papers
- Lightweight image segmentation for echocardiography [0.45360533198417524]
We developed a lightweight U-Net that achieves statistically equivalent performance to nnU-Net on CAMUS.
Our analysis revealed that simple affine augmentations and deep supervision drive performance, while complex augmentations and large model capacity offer diminishing returns.
arXiv Detail & Related papers (2025-09-03T18:33:28Z) - MedSegMamba: 3D CNN-Mamba Hybrid Architecture for Brain Segmentation [15.514511820130474]
We develop a 3D patch-based hybrid CNN-Mamba model for subcortical brain segmentation.
Our model's performance was validated against several benchmarks.
arXiv Detail & Related papers (2024-09-12T02:19:19Z) - Improved Generalization of Weight Space Networks via Augmentations [53.87011906358727]
Learning in deep weight spaces (DWS) is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs).
We empirically analyze the reasons for this overfitting and find that a key reason is the lack of diversity in DWS datasets.
To address this, we explore strategies for data augmentation in weight spaces and propose a MixUp method adapted for weight spaces.
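To illustrate the underlying idea, here is plain MixUp interpolation applied to flattened weight vectors; this is only a sketch of the vanilla technique under the assumption of flat weight vectors, and the paper's weight-space adaptation differs in its details:

```python
import numpy as np

def mixup(w1, y1, w2, y2, alpha=0.2, rng=None):
    """Vanilla MixUp: a convex combination of two samples and their labels.

    Here the "samples" are flattened network weight vectors, sketching the
    general idea of augmenting a deep-weight-space dataset.
    """
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)          # mixing coefficient ~ Beta(alpha, alpha)
    w_mix = lam * w1 + (1.0 - lam) * w2   # interpolated weight vector
    y_mix = lam * y1 + (1.0 - lam) * y2   # interpolated (soft) label
    return w_mix, y_mix, lam

# Two toy "weight vectors" with labels 0 and 1
w1, w2 = np.zeros(8), np.ones(8)
w_mix, y_mix, lam = mixup(w1, 0.0, w2, 1.0, rng=0)
# w_mix lies on the segment between w1 and w2; y_mix equals 1 - lam here
```

A naive interpolation like this ignores permutation symmetries between networks, which is exactly where a weight-space-specific MixUp has to do extra work.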
arXiv Detail & Related papers (2024-02-06T15:34:44Z) - CNN-based fully automatic wrist cartilage volume quantification in MR
Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z) - Lost Vibration Test Data Recovery Using Convolutional Neural Network: A
Case Study [0.0]
This paper proposes a CNN algorithm for recovering lost vibration test data, using the Alamosa Canyon Bridge as a real-structure case study.
Three different CNN models were considered to predict one and two malfunctioning sensors.
The accuracy of the model was increased by adding a convolutional layer.
arXiv Detail & Related papers (2022-04-11T23:24:03Z) - Improving Across-Dataset Brain Tissue Segmentation Using Transformer [10.838458766450989]
This study introduces a novel CNN-Transformer hybrid architecture designed for brain tissue segmentation.
We validate our model's performance across four multi-site T1w MRI datasets.
arXiv Detail & Related papers (2022-01-21T15:16:39Z) - Greedy Network Enlarging [53.319011626986004]
We propose a greedy network enlarging method based on the reallocation of computations.
By modifying the computations at different stages step by step, the enlarged network is equipped with an optimal allocation and utilization of MACs.
With application of our method on GhostNet, we achieve state-of-the-art 80.9% and 84.3% ImageNet top-1 accuracies.
arXiv Detail & Related papers (2021-07-31T08:36:30Z) - Benchmarking CNN on 3D Anatomical Brain MRI: Architectures, Data
Augmentation and Deep Ensemble Learning [2.1446056201053185]
We propose an extensive benchmark of recent state-of-the-art (SOTA) 3D CNN, evaluating also the benefits of data augmentation and deep ensemble learning.
Experiments were conducted on a large multi-site 3D brain anatomical MRI data-set comprising N=10k scans on 3 challenging tasks: age prediction, sex classification, and schizophrenia diagnosis.
We found that all models provide significantly better predictions with VBM images than quasi-raw data.
DenseNet and tiny-DenseNet, a lighter version that we proposed, provide a good compromise in terms of performance in all data regimes.
arXiv Detail & Related papers (2021-06-02T13:00:35Z) - CFPNet-M: A Light-Weight Encoder-Decoder Based Network for Multimodal
Biomedical Image Real-Time Segmentation [0.0]
We developed a novel light-weight architecture -- Channel-wise Feature Pyramid Network for Medicine.
It achieves comparable segmentation results on all five medical datasets with only 0.65 million parameters, which is about 2% of U-Net, and 8.8 MB memory.
arXiv Detail & Related papers (2021-05-10T02:29:11Z) - ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution [57.635467829558664]
We introduce a structural regularization across convolutional kernels in a CNN.
We show that CNNs maintain performance with a dramatic reduction in parameters and computations.
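The sharing idea can be sketched with numpy: each convolutional kernel is expressed as a linear combination of a small shared dictionary of "atoms". The dictionary size and shapes below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

n_kernels, k = 64, 3          # 64 output kernels of size 3x3
n_atoms = 6                   # small shared dictionary of kernel "atoms"
atoms = rng.standard_normal((n_atoms, k, k))        # shared across all kernels
coeffs = rng.standard_normal((n_kernels, n_atoms))  # per-kernel coefficients

# Reconstruct every kernel as K_i = sum_j coeffs[i, j] * atoms[j]
kernels = np.einsum("ij,jkl->ikl", coeffs, atoms)

# Storing (atoms + coefficients) instead of dense kernels shrinks parameters
dense_params = n_kernels * k * k                    # 64 * 9  = 576
decomposed_params = n_atoms * k * k + n_kernels * n_atoms  # 54 + 384 = 438
print(dense_params, decomposed_params)
```

The saving grows with kernel count and size: coefficients are cheap scalars, while the expensive spatial structure lives once in the shared atoms.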
arXiv Detail & Related papers (2020-09-04T20:41:47Z) - The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network
Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z) - When Residual Learning Meets Dense Aggregation: Rethinking the
Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.