Optimized Hybrid Focal Margin Loss for Crack Segmentation
- URL: http://arxiv.org/abs/2302.04395v1
- Date: Thu, 9 Feb 2023 01:26:38 GMT
- Title: Optimized Hybrid Focal Margin Loss for Crack Segmentation
- Authors: Jiajie Chen
- Abstract summary: We propose a novel hybrid focal loss to handle extreme class imbalance and prevent overfitting for crack segmentation.
Our experiments demonstrate that the focal margin component can significantly increase the IoU of cracks by 0.43 on DeepCrack-DB and 0.44 on our PanelCrack dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many loss functions have been derived from the cross-entropy loss, such
as the large-margin softmax loss and the focal loss. The large-margin softmax loss
makes the classification more rigorous and prevents overfitting. The focal loss
alleviates class imbalance in object detection by down-weighting the loss of
well-classified examples. Recent research has shown that these two loss
functions derived from cross entropy have valuable applications in the field of
image segmentation. However, to the best of our knowledge, there is no unified
formulation that combines these two loss functions so that they can not only be
transformed mutually, but can also be used to simultaneously address class
imbalance and overfitting. To this end, we subdivide the entropy-based loss
into the regularizer-based entropy loss and the focal-based entropy loss, and
propose a novel optimized hybrid focal loss to handle extreme class imbalance
and prevent overfitting for crack segmentation. We have evaluated our proposal
on three crack segmentation datasets (DeepCrack-DB, CRACK500
and our private PanelCrack dataset). Our experiments demonstrate that the focal
margin component can significantly increase the IoU of cracks by 0.43 on
DeepCrack-DB and 0.44 on our PanelCrack dataset, respectively.
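The two ingredients the abstract combines, focal down-weighting of easy pixels and a margin that tightens the decision boundary for the true class, can be sketched for the binary (crack vs. background) case as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's exact formulation; `gamma`, `alpha`, and `margin` are illustrative hyperparameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def focal_margin_loss(logits, targets, gamma=2.0, alpha=0.25, margin=0.5):
    """Binary focal loss with an additive logit margin (illustrative sketch).

    logits, targets: float arrays of the same shape; targets in {0, 1}.
    The margin shifts the logit against the true class, so a pixel must be
    classified with extra confidence, in the spirit of large-margin softmax.
    """
    # Shift the logit toward the decision boundary for the true class.
    shifted = np.where(targets == 1, logits - margin, logits + margin)
    p = sigmoid(shifted)
    # Probability assigned to the true class of each pixel.
    p_t = np.where(targets == 1, p, 1.0 - p)
    # Class-balancing weight: alpha for (rare) crack pixels.
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    # Focal modulation (1 - p_t)^gamma down-weights well-classified pixels.
    loss = -alpha_t * (1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-12, 1.0))
    return loss.mean()
```

A confidently correct pixel contributes almost nothing to the mean, while a misclassified pixel keeps a large gradient, which is what lets the rare crack class dominate training despite the extreme class imbalance.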
Related papers
- EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification [1.3778851745408134]
We propose a novel ensemble method, namely EnsLoss, to combine loss functions within the empirical risk minimization (ERM) framework.
We first transform the CC conditions of losses into loss-derivatives, thereby bypassing the need for explicit loss functions.
We theoretically establish the statistical consistency of our approach and provide insights into its benefits.
arXiv Detail & Related papers (2024-09-02T02:40:42Z)
- LEARN: An Invex Loss for Outlier Oblivious Robust Online Optimization [56.67706781191521]
An adversary can introduce outliers by corrupting the loss functions in an arbitrary number k of rounds, unknown to the learner.
We present a robust online optimization framework that remains oblivious to such outliers.
arXiv Detail & Related papers (2024-08-12T17:08:31Z)
- Byzantine-resilient Federated Learning With Adaptivity to Data Heterogeneity [54.145730036889496]
This paper deals with federated learning (FL) in the presence of malicious Byzantine attacks on data.
A novel Robust Average Gradient Algorithm (RAGA) is proposed, which leverages robust aggregation.
arXiv Detail & Related papers (2024-03-20T08:15:08Z)
- Benchmarking Deep AUROC Optimization: Loss Functions and Algorithmic Choices [37.559461866831754]
We benchmark a variety of loss functions with different algorithmic choices for deep AUROC optimization problem.
We highlight the essential choices such as positive sampling rate, regularization, normalization/activation, and weights.
Our findings show that although Adam-type methods are more competitive from the training perspective, they do not outperform the others from the testing perspective.
arXiv Detail & Related papers (2022-03-27T00:47:00Z)
- The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distribution and adopt Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting new loss called KFIoU is easier to implement and works better compared with exact SkewIoU.
arXiv Detail & Related papers (2022-01-29T10:54:57Z)
- Label Distributionally Robust Losses for Multi-class Classification: Consistency, Robustness and Adaptivity [55.29408396918968]
We study a family of loss functions named label-distributionally robust (LDR) losses for multi-class classification.
Our contributions cover both consistency and robustness, establishing the top-$k$ consistency of LDR losses for multi-class classification.
We propose a new adaptive LDR loss that automatically adapts the individualized temperature parameter to the noise degree of class label of each instance.
arXiv Detail & Related papers (2021-12-30T00:27:30Z)
- InverseForm: A Loss Function for Structured Boundary-Aware Segmentation [80.39674800972182]
We present a novel boundary-aware loss term for semantic segmentation using an inverse-transformation network.
This plug-in loss term complements the cross-entropy loss in capturing boundary transformations.
We analyze the quantitative and qualitative effects of our loss function on three indoor and outdoor segmentation benchmarks.
arXiv Detail & Related papers (2021-04-06T18:52:45Z)
- A Mixed Focal Loss Function for Handling Class Imbalanced Medical Image Segmentation [0.7619404259039283]
We propose a new compound loss function derived from modified variants of the Focal loss and Dice loss functions.
Our proposed loss function is associated with a better recall-precision balance, significantly outperforming the other loss functions in both binary and multi-class image segmentation.
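As a rough illustration of such a compound distribution-plus-region objective (not this paper's exact formulation), a weighted focal-plus-Dice loss might be sketched in NumPy as follows; `lam` is a hypothetical mixing weight and `gamma` an illustrative focusing parameter.

```python
import numpy as np

def dice_loss(probs, targets, eps=1e-6):
    """Soft Dice loss over a flattened probability map (illustrative)."""
    inter = (probs * targets).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + targets.sum() + eps)

def focal_loss(probs, targets, gamma=2.0):
    """Binary focal loss on predicted probabilities (illustrative)."""
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    return (-(1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-12, 1.0))).mean()

def mixed_focal_dice(probs, targets, lam=0.5):
    """Weighted sum of a distribution term (focal) and a region term (Dice)."""
    return lam * focal_loss(probs, targets) + (1.0 - lam) * dice_loss(probs, targets)
```

The focal term handles pixel-wise imbalance, while the Dice term directly rewards region overlap, which is one common way such compound losses trade recall against precision.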
arXiv Detail & Related papers (2021-02-08T20:47:38Z)
- Focal and Efficient IOU Loss for Accurate Bounding Box Regression [63.14659624634066]
In object detection, bounding box regression (BBR) is a crucial step that determines the object localization performance.
Most previous loss functions for BBR have two main drawbacks: (i) both $\ell_n$-norm and IOU-based loss functions are inefficient at depicting the objective of BBR, which leads to slow convergence and inaccurate regression results.
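For reference, the plain IoU-based loss this entry contrasts against can be written for two axis-aligned boxes as follows; this is a minimal sketch of the baseline, not the paper's Focal-EIoU formulation.

```python
def iou_loss(box_a, box_b):
    """1 - IoU for two axis-aligned boxes given as (x1, y1, x2, y2).

    Illustrative only: assumes well-formed boxes with x2 > x1 and y2 > y1,
    so the union is never zero.
    """
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return 1.0 - inter / union
```

The loss is 0 for identical boxes and saturates at 1 for disjoint boxes, which is exactly the flat-gradient regime that motivates the refined IoU variants discussed in this line of work.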
arXiv Detail & Related papers (2021-01-20T14:33:58Z)
- Offset Curves Loss for Imbalanced Problem in Medical Segmentation [15.663236378920637]
We develop a new deep learning-based model which takes into account both the higher feature level (the region inside the contour) and the lower feature level (the contour itself).
Our proposed Offset Curves (OsC) loss consists of three main fitting terms.
We evaluate our proposed OsC loss on both a 2D network and a 3D network.
arXiv Detail & Related papers (2020-12-04T08:35:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.