Alleviating Class-wise Gradient Imbalance for Pulmonary Airway
Segmentation
- URL: http://arxiv.org/abs/2011.11952v2
- Date: Thu, 29 Apr 2021 10:15:37 GMT
- Title: Alleviating Class-wise Gradient Imbalance for Pulmonary Airway
Segmentation
- Authors: Hao Zheng, Yulei Qin, Yun Gu, Fangfang Xie, Jie Yang, Jiayuan Sun,
Guang-zhong Yang
- Abstract summary: Automated airway segmentation is a prerequisite for pre-operative diagnosis and intra-operative navigation for pulmonary intervention.
Due to the small size and scattered spatial distribution of peripheral bronchi, this is hampered by severe class imbalance between foreground and background regions.
In this paper, we demonstrate that this problem arises from gradient erosion and dilation of neighboring voxels.
We propose a General Union loss function which obviates the impact of airway size by distance-based weights and adaptively tunes the gradient ratio based on the learning process.
- Score: 35.18277191213468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated airway segmentation is a prerequisite for pre-operative diagnosis
and intra-operative navigation for pulmonary intervention. Due to the small
size and scattered spatial distribution of peripheral bronchi, this is hampered
by severe class imbalance between foreground and background regions, which
makes it challenging for CNN-based methods to parse distal small airways. In
this paper, we demonstrate that this problem arises from gradient erosion and
dilation of neighboring voxels. During back-propagation, if the ratio of
the foreground gradient to background gradient is small while the class
imbalance is local, the foreground gradients can be eroded by their
neighborhoods. This process cumulatively adds noise to the gradient flow from
the top layers to the bottom ones, limiting the
learning of small structures in CNNs. To alleviate this problem, we use group
supervision and the corresponding WingsNet to provide complementary gradient
flows to enhance the training of shallow layers. To further address the
intra-class imbalance between large and small airways, we design a General
Union loss function which obviates the impact of airway size by distance-based
weights and adaptively tunes the gradient ratio based on the learning process.
Extensive experiments on public datasets demonstrate that the proposed method
can predict the airway structures with higher accuracy and better morphological
completeness than the baselines.
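The distance-weighted loss described above can be illustrated as a Tversky-style overlap term whose per-voxel weights decay with distance, so thin peripheral branches are not drowned out by thick proximal airways. This is a minimal sketch, not the paper's exact formulation: the function name, the Gaussian weight form, and the `alpha`/`sigma` parameters are illustrative assumptions.

```python
import math

def distance_weighted_union_loss(pred, target, dist, alpha=0.3, sigma=4.0):
    """Sketch of a distance-weighted, Tversky-style union loss.

    pred, target: flattened voxel values in [0, 1].
    dist: per-voxel distance (e.g., to the airway surface); voxels deep
          inside thick airways get down-weighted so small branches
          contribute comparable gradients.
    alpha: trades foreground vs. background gradient magnitude
           (alpha < 0.5 boosts foreground gradients).
    """
    # Gaussian distance-based weights (assumed form, for illustration).
    w = [math.exp(-(d * d) / (2.0 * sigma * sigma)) for d in dist]
    inter = sum(wi * p * t for wi, p, t in zip(w, pred, target))
    union = sum(wi * (alpha * p + (1.0 - alpha) * t)
                for wi, p, t in zip(w, pred, target))
    return 1.0 - inter / (union + 1e-8)

# Toy 1-D "volume": a thick airway (voxels 0-3, larger dist values inside)
# and a single thin peripheral branch voxel (index 7, dist 0).
target = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
dist   = [2, 1, 1, 2, 9, 9, 9, 0, 9, 9]
pred   = [0.5] * 10
loss = distance_weighted_union_loss(pred, target, dist)
```

A perfect binary prediction drives the loss to zero regardless of airway size, since the weights cancel in the ratio; tuning `alpha` during training would correspond to the paper's adaptive adjustment of the foreground-to-background gradient ratio.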
Related papers
- Boundary-Emphasized Weight Maps for Distal Airway Segmentation [0.0]
We propose the Boundary-Emphasized Loss (BEL), which enhances boundary preservation using a boundary-based weight map and an adaptive weight refinement strategy.
Evaluated on ATM22 and AIIB23, BEL outperforms baseline loss functions, achieving higher topology-related metrics and comparable overall segmentation measures.
arXiv Detail & Related papers (2025-02-28T23:11:13Z) - Multi-Stage Airway Segmentation in Lung CT Based on Multi-scale Nested Residual UNet [3.1903847117782274]
Deep learning has led to significant advancements in medical image segmentation, but maintaining airway continuity remains challenging.
This paper introduces a nested residual framework to enhance information flow, effectively capturing the intricate details of small airways.
We develop a three-stage segmentation pipeline to optimize the training of the MNR-UNet.
arXiv Detail & Related papers (2024-10-24T06:10:09Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}\big(\ln(T) / T^{1 - \frac{1}{\alpha}}\big)$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Interpolation-Split: a data-centric deep learning approach with big interpolated data to boost airway segmentation performance [6.015272528297327]
Airway segmentation plays a critical role in producing the outline of the entire airway tree.
In this study, we propose a data-centric deep learning technique to segment the airway tree.
arXiv Detail & Related papers (2023-07-29T14:51:56Z) - Differentiable Topology-Preserved Distance Transform for Pulmonary
Airway Segmentation [34.22415353209505]
We propose a Differentiable Topology-Preserved Distance Transform (DTPDT) framework to improve the performance of airway segmentation.
A Topology-Preserved Surrogate (TPS) learning strategy is first proposed to balance the training progress of the within-class distribution.
A Convolutional Distance Transform (CDT) is designed to identify the breakage phenomenon with superior sensitivity and minimize the variation of the distance map between the prediction and ground-truth.
arXiv Detail & Related papers (2022-09-17T15:47:01Z) - Fuzzy Attention Neural Network to Tackle Discontinuity in Airway
Segmentation [67.19443246236048]
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases.
Some small-sized airway branches (e.g., bronchi and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation.
This paper presents an efficient method for airway segmentation, comprising a novel fuzzy attention neural network and a comprehensive loss function.
arXiv Detail & Related papers (2022-09-05T16:38:13Z) - Learning Tubule-Sensitive CNNs for Pulmonary Airway and Artery-Vein
Segmentation in CT [45.93021999366973]
Training convolutional neural networks (CNNs) for segmentation of pulmonary airway, artery, and vein is challenging.
We present a CNN-based method for accurate airway and artery-vein segmentation in non-contrast computed tomography.
It enjoys superior sensitivity to tenuous peripheral bronchioles, arterioles, and venules.
arXiv Detail & Related papers (2020-12-10T15:56:08Z) - A Study of Gradient Variance in Deep Learning [56.437755740715396]
We introduce a method, Gradient Clustering, to minimize the variance of average mini-batch gradient with stratified sampling.
We measure the gradient variance on common deep learning benchmarks and observe that, contrary to common assumptions, gradient variance increases during training.
arXiv Detail & Related papers (2020-07-09T03:23:10Z) - Unbiased Risk Estimators Can Mislead: A Case Study of Learning with
Complementary Labels [92.98756432746482]
We study a weakly supervised problem called learning with complementary labels.
We show that the quality of gradient estimation matters more in risk minimization.
We propose a novel surrogate complementary loss(SCL) framework that trades zero bias with reduced variance.
arXiv Detail & Related papers (2020-07-05T04:19:37Z) - The Break-Even Point on Optimization Trajectories of Deep Neural
Networks [64.7563588124004]
We argue for the existence of the "break-even" point on this trajectory.
We show that using a large learning rate in the initial phase of training reduces the variance of the gradient.
We also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers.
arXiv Detail & Related papers (2020-02-21T22:55:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.