Striking the Right Balance: Recall Loss for Semantic Segmentation
- URL: http://arxiv.org/abs/2106.14917v1
- Date: Mon, 28 Jun 2021 18:02:03 GMT
- Title: Striking the Right Balance: Recall Loss for Semantic Segmentation
- Authors: Junjiao Tian, Niluthpol Mithun, Zach Seymour, Han-Pang Chiu, Zsolt
Kira
- Abstract summary: Class imbalance is a fundamental problem in computer vision applications such as semantic segmentation.
We propose a hard-class mining loss by reshaping the vanilla cross entropy loss.
We show that the novel recall loss changes gradually between the standard cross entropy loss and the inverse frequency weighted loss.
- Score: 24.047359482606307
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class imbalance is a fundamental problem in computer vision applications such
as semantic segmentation. Specifically, uneven class distributions in a
training dataset often result in unsatisfactory performance on
under-represented classes. Many works have proposed to weight the standard
cross entropy loss function with pre-computed weights based on class
statistics, such as the number of samples and class margins. There are two
major drawbacks to these methods: 1) constantly up-weighting minority classes
can introduce excessive false positives in semantic segmentation; 2) a minority
class is not necessarily a hard class. The consequence is low precision due to
excessive false positives. In this regard, we propose a hard-class mining loss
by reshaping the vanilla cross entropy loss such that it weights the loss for
each class dynamically based on instantaneous recall performance. We show that
the novel recall loss changes gradually between the standard cross entropy loss
and the inverse frequency weighted loss. Recall loss also leads to improved
mean accuracy while offering competitive mean Intersection over Union (IoU)
performance. On the Synthia dataset, recall loss achieves a 9% relative improvement
performance. On Synthia dataset, recall loss achieves 9% relative improvement
on mean accuracy with competitive mean IoU using DeepLab-ResNet18 compared to
the cross entropy loss. Code available at
https://github.com/PotatoTian/recall-semseg.
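The abstract describes weighting the cross entropy loss of each class dynamically by its instantaneous recall. Below is a minimal PyTorch-style sketch of that idea; the function name, the batch-level recall estimate, and the `ignore_index` handling are illustrative assumptions rather than the authors' exact formulation (the linked repository is the reference implementation).

```python
import torch
import torch.nn.functional as F

def recall_weighted_ce(logits, target, num_classes, ignore_index=255, eps=1e-6):
    """Cross entropy with per-class weights derived from instantaneous recall.

    Sketch of the idea in the abstract: classes with low recall in the current
    batch are up-weighted, classes with high recall are down-weighted. Not
    necessarily the paper's exact weighting scheme.

    logits: (N, C, H, W) raw scores; target: (N, H, W) integer labels.
    """
    with torch.no_grad():
        pred = logits.argmax(dim=1)                      # hard predictions per pixel
        valid = target != ignore_index
        weights = torch.ones(num_classes, device=logits.device)
        for c in range(num_classes):
            gt_c = (target == c) & valid                 # ground-truth pixels of class c
            support = gt_c.sum()
            if support > 0:
                tp = (pred[gt_c] == c).sum().float()     # correctly recalled pixels
                recall = tp / support.float()
                weights[c] = 1.0 - recall + eps          # low recall -> larger weight
    return F.cross_entropy(logits, target, weight=weights, ignore_index=ignore_index)
```

In practice the per-class recall would likely be smoothed across batches rather than estimated from a single batch; consult the repository above for the authoritative version.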
Related papers
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing such data is often impractical due to memory constraints or data privacy concerns.
As a replacement, data-free replay methods have been proposed that invert samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Class Instance Balanced Learning for Long-Tailed Classification [0.0]
Long-tailed image classification task deals with large imbalances in the class frequencies of the training data.
Previous approaches have shown that combining cross-entropy and contrastive learning can improve performance on the long-tailed task.
We propose a novel class instance balanced loss (CIBL), which reweights the relative contributions of a cross-entropy and a contrastive loss as a function of the frequency of class instances in the training batch.
arXiv Detail & Related papers (2023-07-11T15:09:10Z)
- Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data [40.64419633190104]
We analyze the class-imbalanced learning problem by examining the loss landscape of neural networks trained with re-weighting and margin-based techniques.
We find that optimization methods designed to escape from saddle points can be effectively used to improve generalization on minority classes.
Using SAM results in a 6.2% increase in accuracy on the minority classes over the state-of-the-art Vector Scaling Loss, leading to an overall average increase of 4% across imbalanced datasets.
arXiv Detail & Related papers (2022-12-28T14:00:44Z)
- Neural Collapse Inspired Attraction-Repulsion-Balanced Loss for Imbalanced Learning [97.81549071978789]
We propose Attraction-Repulsion-Balanced Loss (ARB-Loss) to balance the different components of the gradients.
We perform experiments on large-scale classification and segmentation datasets, and our ARB-Loss achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-04-19T08:23:23Z)
- Stochastic smoothing of the top-K calibrated hinge loss for deep imbalanced classification [8.189630642296416]
We introduce a top-K hinge loss inspired by recent developments on top-K losses.
Our proposal is based on smoothing the top-K operator, building on the flexible "perturbed" framework.
We show that our loss function performs very well in the case of balanced datasets, while benefiting from a significantly lower computational time.
arXiv Detail & Related papers (2022-02-04T15:39:32Z)
- You Only Need End-to-End Training for Long-Tailed Recognition [8.789819609485225]
Cross-entropy loss tends to produce highly correlated features on imbalanced data.
We propose two novel modules: the Block-based Relatively Balanced Batch Sampler (B3RS) and Batch Embedded Training (BET).
Experimental results on the long-tailed classification benchmarks, CIFAR-LT and ImageNet-LT, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2021-12-11T11:44:09Z)
- Mixing between the Cross Entropy and the Expectation Loss Terms [89.30385901335323]
Cross entropy loss tends to focus on hard to classify samples during training.
We show that adding the expectation loss to the optimization goal helps the network achieve better accuracy.
Our experiments show that the new training protocol improves performance across a diverse set of classification domains.
arXiv Detail & Related papers (2021-09-12T23:14:06Z)
- Learning with Noisy Labels via Sparse Regularization [76.31104997491695]
Learning with noisy labels is an important task for training accurate deep neural networks.
Some commonly-used loss functions, such as Cross Entropy (CE), suffer from severe overfitting to noisy labels.
We introduce a sparse regularization strategy to approximate the one-hot constraint.
arXiv Detail & Related papers (2021-07-31T09:40:23Z)
- Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning [59.65752299209042]
We investigate learning a ConvNet under class-imbalanced data.
We find that the ConvNet significantly over-fits the minority classes.
We propose to incorporate class-dependent temperatures (CDT) when training the ConvNet.
arXiv Detail & Related papers (2020-01-06T03:52:11Z)
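Both the main abstract and several of the entries above compare against cross entropy with pre-computed, inverse-class-frequency weights. A minimal sketch of that static baseline follows; the helper name and the normalization are illustrative assumptions, and the exact scheme varies between papers.

```python
import torch
import torch.nn.functional as F

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int, eps: float = 1e-6) -> torch.Tensor:
    """Static class weights proportional to inverse pixel frequency.

    `labels` holds integer class ids over the training set (or a large sample
    of it); ids outside [0, num_classes) are simply not counted.
    """
    counts = torch.zeros(num_classes, dtype=torch.float32)
    for c in range(num_classes):
        counts[c] = (labels == c).sum()
    freq = counts / counts.sum().clamp(min=1.0)     # per-class frequency
    weights = 1.0 / (freq + eps)                    # rare classes get large weights
    return weights * num_classes / weights.sum()    # normalize so weights average to 1

# Typical usage (weights computed once, before training):
# class_weights = inverse_frequency_weights(all_training_labels, num_classes=19)
# loss = F.cross_entropy(logits, target, weight=class_weights.to(logits.device))
```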
This list is automatically generated from the titles and abstracts of the papers on this site.