Class-Agnostic Segmentation Loss and Its Application to Salient Object
Detection and Segmentation
- URL: http://arxiv.org/abs/2108.04226v1
- Date: Fri, 16 Jul 2021 12:26:31 GMT
- Title: Class-Agnostic Segmentation Loss and Its Application to Salient Object
Detection and Segmentation
- Authors: Angira Sharma, Naeemullah Khan, Muhammad Mubashar, Ganesh
Sundaramoorthi, Philip Torr
- Abstract summary: We present a novel loss function, called class-agnostic segmentation (CAS) loss.
We show that the CAS loss function is sparse, bounded, and robust to class imbalance.
- Score: 17.149364927872014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present a novel loss function, called
class-agnostic segmentation (CAS) loss. With CAS loss, the class descriptors
are learned during training of the network. We do not require class labels
to be defined a priori; rather, the CAS loss clusters regions with similar
appearance together in a weakly-supervised manner. Furthermore, we show that
the CAS loss function is sparse, bounded, and robust to class imbalance. We
first apply our CAS loss function with fully-convolutional ResNet101 and
DeepLab-v3 architectures to the binary segmentation problem of salient
object detection. We compare performance against state-of-the-art methods in
two settings, low- and high-fidelity training data, on seven salient object
detection datasets. For low-fidelity training data (incorrect class labels),
the class-agnostic segmentation loss outperforms state-of-the-art methods on
salient object detection datasets by large margins of around 50%. For
high-fidelity training data (correct class labels), class-agnostic
segmentation models perform as well as the state-of-the-art approaches,
beating them on most datasets. To show the utility of the loss function
across different domains, we also test on a general segmentation dataset,
where the class-agnostic segmentation loss outperforms competing losses by
large margins.
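The abstract describes the CAS loss as clustering regions of similar appearance while learning the region descriptors during training, rather than fixing class labels a priori. The exact formulation is not given here, so the following is a minimal NumPy sketch of a generic region-clustering segmentation loss in that spirit; the function name, tensor shapes, and the piecewise-constant (k-means-like) energy are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def region_clustering_loss(features, soft_masks, eps=1e-8):
    """Illustrative region-clustering segmentation loss (a sketch, not
    the paper's exact CAS formulation).

    features:   (H, W, D) per-pixel appearance descriptors
    soft_masks: (H, W, K) soft segmentation probabilities over K regions

    For each region k, a descriptor c_k is the mask-weighted mean
    feature; the loss is the mask-weighted squared distance of every
    pixel to its region's descriptor, so descriptors are derived from
    the current segmentation instead of predefined class labels.
    """
    H, W, D = features.shape
    K = soft_masks.shape[-1]
    f = features.reshape(-1, D)    # (N, D) flattened pixel features
    m = soft_masks.reshape(-1, K)  # (N, K) soft region assignments
    total = 0.0
    for k in range(K):
        w = m[:, k:k + 1]                         # (N, 1) soft weights
        c_k = (w * f).sum(0) / (w.sum() + eps)    # region descriptor (D,)
        total += (w[:, 0] * ((f - c_k) ** 2).sum(1)).sum()
    return total / (H * W)
```

With a synthetic two-region image, the loss is near zero when the masks align with the regions and strictly larger for an uninformative uniform segmentation, which is the behavior a clustering-style loss should have.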
Related papers
- SuSana Distancia is all you need: Enforcing class separability in metric
learning via two novel distance-based loss functions for few-shot image
classification [0.9236074230806579]
We propose two loss functions that weight the embedding vectors by the intra-class and inter-class distances among the few available samples.
Our results show a significant improvement in accuracy on the miniImageNet benchmark, outperforming other metric-based few-shot learning methods by a margin of 2%.
arXiv Detail & Related papers (2023-05-15T23:12:09Z) - CAFS: Class Adaptive Framework for Semi-Supervised Semantic Segmentation [5.484296906525601]
Semi-supervised semantic segmentation learns a model for classifying pixels into specific classes using a few labeled samples and numerous unlabeled images.
We propose a class-adaptive framework for semi-supervised semantic segmentation (CAFS).
CAFS constructs a validation set from the labeled dataset to leverage per-class calibration performance.
arXiv Detail & Related papers (2023-03-21T05:56:53Z) - Long-tail Detection with Effective Class-Margins [4.18804572788063]
We show how the commonly used mean average precision evaluation metric on an unknown test set is bound by a margin-based binary classification error.
We optimize the margin-based binary classification error with a novel surrogate objective called Effective Class-Margin Loss (ECM).
arXiv Detail & Related papers (2023-01-23T21:25:24Z) - Spacing Loss for Discovering Novel Categories [72.52222295216062]
Novel Class Discovery (NCD) is a learning paradigm, where a machine learning model is tasked to semantically group instances from unlabeled data.
We first characterize existing NCD approaches into single-stage and two-stage methods based on whether they require access to labeled and unlabeled data together.
We devise a simple yet powerful loss function that enforces separability in the latent space using cues from multi-dimensional scaling.
arXiv Detail & Related papers (2022-04-22T09:37:11Z) - Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D
Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z) - Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z) - Class-Agnostic Segmentation Loss and Its Application to Salient Object
Detection and Segmentation [17.532822703595766]
We present a novel loss function, called class-agnostic segmentation (CAS) loss.
We show that the CAS loss function is sparse, bounded, and robust to class-imbalance.
We investigate the performance against the state-of-the-art methods in two settings of low and high-fidelity training data.
arXiv Detail & Related papers (2020-10-28T07:11:15Z) - CC-Loss: Channel Correlation Loss For Image Classification [35.43152123975516]
The channel correlation loss (CC-Loss) is able to constrain the specific relations between classes and channels.
Two different backbone models trained with the proposed CC-Loss outperform the state-of-the-art loss functions on three image classification datasets.
arXiv Detail & Related papers (2020-10-12T05:59:06Z) - Fine-Grained Visual Classification with Efficient End-to-end
Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z) - Equalization Loss for Long-Tailed Object Recognition [109.91045951333835]
State-of-the-art object detection methods still perform poorly on large vocabulary and long-tailed datasets.
We propose a simple but effective loss, named equalization loss, to tackle the problem of long-tailed rare categories.
Our method achieves AP gains of 4.1% and 4.8% for the rare and common categories on the challenging LVIS benchmark.
arXiv Detail & Related papers (2020-03-11T09:14:53Z) - Identifying and Compensating for Feature Deviation in Imbalanced Deep
Learning [59.65752299209042]
We investigate learning a ConvNet under such an imbalanced scenario.
We find that a ConvNet significantly over-fits the minor classes.
We propose to incorporate class-dependent temperatures (CDT) when training the ConvNet.
arXiv Detail & Related papers (2020-01-06T03:52:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.