On Mathews Correlation Coefficient and Improved Distance Map Loss for
Automatic Glacier Calving Front Segmentation in SAR Imagery
- URL: http://arxiv.org/abs/2102.08312v1
- Date: Tue, 16 Feb 2021 17:53:34 GMT
- Title: On Mathews Correlation Coefficient and Improved Distance Map Loss for
Automatic Glacier Calving Front Segmentation in SAR Imagery
- Authors: Amirabbas Davari, Saahil Islam, Thorsten Seehaus, Matthias Braun,
Andreas Maier, Vincent Christlein
- Abstract summary: The vast majority of outlet glaciers and ice streams of the polar ice sheets end in the ocean.
Ice mass loss via calving of the glaciers into the ocean has increased over the last few decades.
Deep neural network-based semantic segmentation pipelines can be used to delineate the acquired SAR imagery.
- Score: 7.64750171496838
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The vast majority of the outlet glaciers and ice streams of the polar ice
sheets end in the ocean. Ice mass loss via calving of the glaciers into the
ocean has increased over the last few decades. Information on the temporal
variability of the calving front position provides fundamental information on
the state of the glacier and ice stream, which can be exploited as calibration
and validation data to enhance ice dynamics modeling. To identify the calving
front position automatically, deep neural network-based semantic segmentation
pipelines can be used to delineate the acquired SAR imagery. However, the
extreme class imbalance makes accurate calving front segmentation in these
images highly challenging. Therefore, we propose the use of the Mathews
correlation coefficient (MCC) as an early stopping criterion because of its
symmetrical properties and its invariance towards class imbalance. Moreover, we
propose an improvement to the distance map-based binary cross-entropy (BCE)
loss function. The distance map adds context to the loss function about the
important regions for segmentation and helps account for the imbalanced data.
Using the Mathews correlation coefficient as the early stopping criterion
yields an average 15% improvement in the Dice coefficient compared to the
commonly used BCE. The
modified distance map loss further improves the segmentation performance by
another 2%. These results are encouraging as they support the effectiveness of
the proposed methods for segmentation problems suffering from extreme class
imbalances.
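As a concrete illustration of the two proposals, the sketch below computes the MCC from binary prediction and ground-truth masks (so it can be tracked on a validation set as the early-stopping signal) together with a distance-map-weighted BCE loss. This is a minimal sketch under stated assumptions, not the paper's implementation: the exponential weighting, the alpha parameter, and the helper names are illustrative choices.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt


def mcc(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-8) -> float:
    """Matthews correlation coefficient for binary masks (1 = front pixel)."""
    tp = float(np.sum((pred_mask == 1) & (true_mask == 1)))
    tn = float(np.sum((pred_mask == 0) & (true_mask == 0)))
    fp = float(np.sum((pred_mask == 1) & (true_mask == 0)))
    fn = float(np.sum((pred_mask == 0) & (true_mask == 1)))
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps
    return float((tp * tn - fp * fn) / denom)


def make_distance_map(front_mask: np.ndarray) -> np.ndarray:
    """Euclidean distance of every pixel to the nearest front pixel."""
    return distance_transform_edt(front_mask == 0)


def distance_weighted_bce(logits: torch.Tensor,
                          target: torch.Tensor,
                          dist_map: torch.Tensor,
                          alpha: float = 1.0) -> torch.Tensor:
    """Per-pixel BCE weighted by a function of the distance map.

    Pixels close to the calving front receive a larger weight, which counters
    the extreme class imbalance. The exponential form and alpha are
    illustrative assumptions, not the paper's exact formulation.
    """
    per_pixel = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    weights = 1.0 + alpha * torch.exp(-dist_map)
    return (weights * per_pixel).mean()


if __name__ == "__main__":
    # Synthetic 64x64 scene with a two-pixel-wide vertical "front".
    gt = np.zeros((64, 64), dtype=np.uint8)
    gt[:, 31:33] = 1
    pred = np.zeros_like(gt)
    pred[:, 30:32] = 1                      # slightly shifted prediction

    # Validation MCC, e.g. tracked per epoch as the early-stopping signal.
    print("val MCC:", round(mcc(pred, gt), 3))

    # Distance-map-weighted BCE on random logits.
    dist = torch.from_numpy(make_distance_map(gt)).float()
    logits = torch.randn(64, 64)
    target = torch.from_numpy(gt).float()
    print("loss:", distance_weighted_bce(logits, target, dist).item())
```

In a training loop, one would track the validation MCC after each epoch and stop, or keep the best checkpoint, once it fails to improve for a fixed patience.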
Related papers
- Understanding Warmup-Stable-Decay Learning Rates: A River Valley Loss Landscape Perspective [66.80315289020487]
The Warmup-Stable-Decay (WSD) schedule uses a constant learning rate to produce a main branch of iterates that can continue indefinitely without a pre-specified compute budget.
We show that pretraining loss exhibits a river valley landscape, which resembles a deep valley with a river at its bottom.
Inspired by the theory, we introduce WSD-S, a variant of WSD that reuses previous checkpoints' decay phases and keeps only one main branch.
arXiv Detail & Related papers (2024-10-07T16:49:39Z) - Comparison of Cross-Entropy, Dice, and Focal Loss for Sea Ice Type
Segmentation [1.4364491422470593]
We show how three loss functions affect the performance of CNN models trained to predict the dominant ice type in Sentinel-1 images.
Although Dice and Focal loss produce higher metrics, the results from cross-entropy seem generally more physically consistent.
arXiv Detail & Related papers (2023-10-26T04:18:00Z) - Learning Partial Correlation based Deep Visual Representation for Image
Classification [61.0532370259644]
We formulate sparse inverse covariance estimation (SICE) as a novel structured layer of a CNN.
Our work obtains a partial correlation based deep visual representation and mitigates the small sample problem.
Experiments show the efficacy and superior classification performance of our model.
arXiv Detail & Related papers (2023-04-23T10:09:01Z) - Dataset Distillation with Convexified Implicit Gradients [69.16247946639233]
We show how implicit gradients can be effectively used to compute meta-gradient updates.
We further equip the algorithm with a convexified approximation that corresponds to learning on top of a frozen finite-width neural kernel.
arXiv Detail & Related papers (2023-02-13T23:53:16Z) - AMD-HookNet for Glacier Front Segmentation [17.60067480799222]
Knowledge of changes in glacier calving front positions is important for assessing the status of glaciers.
Deep learning-based methods have shown great potential for glacier calving front delineation from optical and radar satellite imagery.
We propose Attention-Multi-hooking-Deep-supervision HookNet (AMD-HookNet), a novel glacier calving front segmentation framework.
arXiv Detail & Related papers (2023-02-06T12:39:40Z) - SGCN:Sparse Graph Convolution Network for Pedestrian Trajectory
Prediction [64.16212996247943]
We present a Sparse Graph Convolution Network (SGCN) for pedestrian trajectory prediction.
Specifically, the SGCN explicitly models sparse directed interactions with a sparse directed spatial graph to capture adaptive interactions between pedestrians.
Visualizations indicate that our method can capture adaptive interactions between pedestrians and their effective motion tendencies.
arXiv Detail & Related papers (2021-04-04T03:17:42Z) - Pixel-wise Distance Regression for Glacier Calving Front Detection and
Segmentation [7.64750171496838]
Deep learning approaches have been investigated for monitoring the evolution and status of glaciers.
In this work, we propose to mitigate the class imbalance between the calving front class and the non-calving front class by reformulating the segmentation problem as a pixel-wise regression task.
A Convolutional Neural Network is optimized to predict the distance to the glacier front for each pixel in the image (a minimal sketch of such a regression target appears after this list).
arXiv Detail & Related papers (2021-03-09T20:58:33Z) - Bayesian U-Net for Segmenting Glaciers in SAR Imagery [7.960675807187592]
We propose to compute uncertainty and use it in an Uncertainty Optimization regime as a novel two-stage process.
We show that feeding the uncertainty map to the network leads to 95.24% Dice similarity.
This is an overall improvement in the segmentation performance compared to the state-of-the-art deterministic U-Net-based glacier segmentation pipelines.
arXiv Detail & Related papers (2021-01-08T23:17:49Z) - Glacier Calving Front Segmentation Using Attention U-Net [7.64750171496838]
We show a method to segment the glacier calving fronts from SAR images in an end-to-end fashion using Attention U-Net.
Adding attention modules to the state-of-the-art U-Net lets us analyze the learning process by extracting its attention maps.
Our proposed attention U-Net performs comparably to the standard U-Net while providing additional insight into those regions on which the network learned to focus more.
arXiv Detail & Related papers (2021-01-08T23:06:21Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method, improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - The Break-Even Point on Optimization Trajectories of Deep Neural
Networks [64.7563588124004]
We argue for the existence of a "break-even" point on the optimization trajectory of deep neural networks.
We show that using a large learning rate in the initial phase of training reduces the variance of the gradient.
We also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers.
arXiv Detail & Related papers (2020-02-21T22:55:51Z)
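The pixel-wise distance regression entry above recasts calving front detection as regressing, for every pixel, its distance to the front. Below is a minimal sketch of how such a regression target can be built from a binary front mask; the clipping value, the normalization, and the choice of an L1 loss are illustrative assumptions rather than the cited paper's exact setup.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt


def distance_regression_target(front_mask: np.ndarray, cap: float = 100.0) -> np.ndarray:
    """Per-pixel Euclidean distance to the nearest front pixel, clipped and
    normalized to [0, 1]. The clipping value `cap` is an assumption made here
    to keep far-away pixels from dominating the regression loss."""
    dist = distance_transform_edt(front_mask == 0)
    return np.clip(dist, 0.0, cap) / cap


# Usage: regress the network output against the distance target instead of a
# binary mask; `predicted` stands in for the output of a CNN such as a U-Net.
gt_front = np.zeros((64, 64), dtype=np.uint8)
gt_front[:, 32] = 1
target = torch.from_numpy(distance_regression_target(gt_front)).float()
predicted = torch.rand(64, 64)
loss = F.l1_loss(predicted, target)        # L1 chosen here for illustration
print("regression loss:", loss.item())
```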