Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation
- URL: http://arxiv.org/abs/2010.07930v2
- Date: Thu, 3 Dec 2020 05:05:15 GMT
- Title: Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation
- Authors: Hao Li, Chenxin Tao, Xizhou Zhu, Xiaogang Wang, Gao Huang, Jifeng Dai
- Abstract summary: We propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric.
Experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently.
- Score: 56.343646789922545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing proper loss functions is essential in training deep networks.
Especially in the field of semantic segmentation, various evaluation metrics
have been proposed for diverse scenarios. Despite the success of the widely
adopted cross-entropy loss and its variants, the misalignment between the loss
functions and evaluation metrics degrades the network performance. Meanwhile,
manually designing loss functions for each specific metric requires expertise
and significant manpower. In this paper, we propose to automate the design of
metric-specific loss functions by searching differentiable surrogate losses for
each metric. We substitute the non-differentiable operations in the metrics
with parameterized functions, and conduct parameter search to optimize the
shape of loss surfaces. Two constraints are introduced to regularize the search
space and make the search efficient. Extensive experiments on PASCAL VOC and
Cityscapes demonstrate that the searched surrogate losses outperform the
manually designed loss functions consistently. The searched losses can
generalize well to other datasets and networks. Code shall be released.
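The core idea described in the abstract, replacing the non-differentiable operations in a metric with parameterized differentiable substitutes, can be illustrated with a minimal sketch. Here the hard argmax/one-hot step inside mIoU is replaced by a temperature-scaled softmax; this is one assumed parameterization (the function name and the `tau` parameter are illustrative), whereas the paper searches over more general surrogate shapes.

```python
import numpy as np

def soft_iou_surrogate(logits, labels, num_classes, tau=1.0):
    """Differentiable IoU surrogate (sketch): the hard one-hot argmax in the
    IoU metric is replaced by a temperature-scaled softmax. `tau` is the
    parameter one could search over to reshape the loss surface."""
    # logits: (N, C) per-pixel class scores; labels: (N,) integer class ids
    exp = np.exp(logits / tau)
    probs = exp / exp.sum(axis=1, keepdims=True)   # soft prediction replaces one-hot argmax
    onehot = np.eye(num_classes)[labels]           # ground-truth one-hot encoding
    inter = (probs * onehot).sum(axis=0)           # soft per-class intersection
    union = (probs + onehot - probs * onehot).sum(axis=0)  # soft per-class union
    iou = inter / np.maximum(union, 1e-8)
    return 1.0 - iou.mean()                        # loss = 1 - soft mIoU
```

As `tau` shrinks, the softmax approaches the hard argmax and the surrogate approaches the true (non-differentiable) metric; larger `tau` yields smoother gradients.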
Related papers
- AnyLoss: Transforming Classification Metrics into Loss Functions [21.34290540936501]
Evaluation metrics are used to assess the performance of models in binary classification tasks.
Most metrics are derived from a confusion matrix in a non-differentiable form, making it difficult to generate a differentiable loss function that could directly optimize them.
We propose a general-purpose approach, AnyLoss, that transforms any confusion-matrix-based metric into a loss function usable directly in optimization.
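The confusion-matrix idea summarized above can be sketched as follows: soft predictions fill the confusion-matrix cells, making a metric such as F1 differentiable. The amplifying sigmoid and its slope `amp` are assumptions for illustration, not necessarily the exact form used in the paper.

```python
import numpy as np

def soft_f1_loss(probs, labels, amp=20.0):
    """Sketch of a confusion-matrix-based surrogate: an amplifying sigmoid
    (slope `amp`, an assumed hyperparameter) pushes probabilities toward
    {0, 1}, then soft confusion-matrix entries yield a differentiable F1,
    minimized as 1 - F1."""
    # probs: (N,) predicted positive probabilities; labels: (N,) in {0, 1}
    p = 1.0 / (1.0 + np.exp(-amp * (probs - 0.5)))  # amplified soft predictions
    tp = (p * labels).sum()                          # soft true positives
    fp = (p * (1 - labels)).sum()                    # soft false positives
    fn = ((1 - p) * labels).sum()                    # soft false negatives
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-8)
    return 1.0 - f1
```

The same soft confusion-matrix entries can be recombined into other metrics (accuracy, balanced accuracy, G-mean) without changing the construction.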
arXiv Detail & Related papers (2024-05-23T16:14:16Z)
- A survey and taxonomy of loss functions in machine learning [60.41650195728953]
Most state-of-the-art machine learning techniques revolve around the optimisation of loss functions.
This survey aims to provide a reference of the most essential loss functions for both beginner and advanced machine learning practitioners.
arXiv Detail & Related papers (2023-01-13T14:38:24Z)
- Searching Parameterized AP Loss for Object Detection [36.3603004789312]
Loss functions play an important role in training deep-network-based object detectors.
Due to the non-differentiable nature of the AP metric, traditional object detectors adopt separate differentiable losses for the localization and classification sub-tasks.
We propose parameterized AP Loss, where parameterized functions are introduced to substitute the non-differentiable components in the AP calculation.
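The substitution described above can be sketched with a minimal soft-AP example: the Heaviside step used to compute ranks is replaced by a sigmoid with slope `theta`. This fixed sigmoid is an assumed stand-in; the paper searches over more general parameterized substitute functions.

```python
import numpy as np

def soft_ap_surrogate(scores, labels, theta=10.0):
    """Sketch of a differentiable AP surrogate: the step function in the
    rank computation is replaced by a sigmoid of slope `theta` (assumed
    parameterization for illustration)."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-theta * x))
    pos = np.flatnonzero(labels == 1)
    ap = 0.0
    for i in pos:
        diff = scores - scores[i]
        # Soft rank of sample i; subtract sig(0) to exclude self-comparison.
        rank_all = 1.0 + sig(diff).sum() - sig(0.0)
        rank_pos = 1.0 + sig(diff[pos]).sum() - sig(0.0)
        ap += rank_pos / rank_all                 # soft precision at sample i
    return 1.0 - ap / max(len(pos), 1)            # loss = 1 - soft AP
```

When positives are scored above negatives the soft ranks agree and the loss approaches zero; a steeper `theta` tightens the approximation to the true AP at the cost of sharper gradients.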
arXiv Detail & Related papers (2021-12-09T18:59:54Z)
- AutoLoss: Automated Loss Function Search in Recommendations [34.27873944762912]
We propose an AutoLoss framework that can automatically and adaptively search for the appropriate loss function from a set of candidates.
Unlike existing algorithms, the proposed controller can adaptively generate the loss probabilities for different data examples according to their varied convergence behaviors.
arXiv Detail & Related papers (2021-06-12T08:15:00Z)
- InverseForm: A Loss Function for Structured Boundary-Aware Segmentation [80.39674800972182]
We present a novel boundary-aware loss term for semantic segmentation using an inverse-transformation network.
This plug-in loss term complements the cross-entropy loss in capturing boundary transformations.
We analyze the quantitative and qualitative effects of our loss function on three indoor and outdoor segmentation benchmarks.
arXiv Detail & Related papers (2021-04-06T18:52:45Z)
- AutoLoss-Zero: Searching Loss Functions from Scratch for Generic Tasks [78.27036391638802]
AutoLoss-Zero is the first framework for searching loss functions from scratch for generic tasks.
A loss-rejection protocol and a gradient-equivalence-check strategy are developed to improve the search efficiency.
Experiments on various computer vision tasks demonstrate that our searched loss functions are on par with or superior to existing loss functions.
arXiv Detail & Related papers (2021-03-25T17:59:09Z)
- Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search [101.73248560009124]
We propose an effective convergence-simulation driven evolutionary search algorithm, CSE-Autoloss, for speeding up the search progress.
We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses.
Our experiments show that the best-discovered loss function combinations outperform default combinations by 1.1% and 0.8% in terms of mAP for two-stage and one-stage detectors.
arXiv Detail & Related papers (2021-02-09T08:34:52Z)
- A Unified Framework of Surrogate Loss by Refactoring and Interpolation [65.60014616444623]
We introduce UniLoss, a unified framework to generate surrogate losses for training deep networks with gradient descent.
We validate the effectiveness of UniLoss on three tasks and four datasets.
arXiv Detail & Related papers (2020-07-27T21:16:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.