CC-Loss: Channel Correlation Loss For Image Classification
- URL: http://arxiv.org/abs/2010.05469v1
- Date: Mon, 12 Oct 2020 05:59:06 GMT
- Title: CC-Loss: Channel Correlation Loss For Image Classification
- Authors: Zeyu Song, Dongliang Chang, Zhanyu Ma, Xiaoxu Li, Zheng-Hua Tan
- Abstract summary: The channel correlation loss (CC-Loss) is able to constrain the specific relations between classes and channels.
Two different backbone models trained with the proposed CC-Loss outperform the state-of-the-art loss functions on three image classification datasets.
- Score: 35.43152123975516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The loss function is a key component in deep learning models. A commonly used
loss function for classification is the cross entropy loss, which is a simple
yet effective application of information theory for classification problems.
Based on this loss, many other loss functions have been proposed, e.g.,
by adding intra-class and inter-class constraints to enhance the discriminative
ability of the learned features. However, these loss functions fail to consider
the connections between the feature distribution and the model structure.
Aiming at addressing this problem, we propose a channel correlation loss
(CC-Loss) that is able to constrain the specific relations between classes and
channels as well as maintain the intra-class and the inter-class separability.
CC-Loss uses a channel attention module to generate channel attention of
features for each sample in the training stage. Next, a Euclidean distance
matrix is calculated to make the channel attention vectors associated with the
same class become identical and to increase the difference between different
classes. Finally, we obtain a feature embedding with good intra-class
compactness and inter-class separability. Experimental results show that two
different backbone models trained with the proposed CC-Loss outperform the
state-of-the-art loss functions on three image classification datasets.
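The mechanism described in the abstract (a per-sample channel attention vector plus a Euclidean distance matrix that pulls same-class attention vectors together and pushes different classes apart) can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' implementation: the SE-style attention module, the margin-based inter-class term, and the combination with cross-entropy are all choices made for this sketch.

```python
# Hypothetical sketch of the CC-Loss idea; not the authors' reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """SE-style channel attention (one plausible choice of attention module)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> per-sample channel attention vectors (B, C)
        return self.fc(feat.mean(dim=(2, 3)))

def cc_loss(attention: torch.Tensor, labels: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pull same-class attention vectors together; push different classes apart.

    attention: (B, C) channel attention vectors, labels: (B,) class indices.
    The margin-based push term is an assumption of this sketch; the abstract only
    states that inter-class differences are increased via a Euclidean distance matrix.
    """
    dist = torch.cdist(attention, attention, p=2)      # (B, B) pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # (B, B) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    intra_pairs = dist[same & ~eye]                     # same-class pairs, excluding self
    inter_pairs = dist[~same]                           # different-class pairs
    intra = intra_pairs.mean() if intra_pairs.numel() else dist.new_zeros(())
    inter = F.relu(margin - inter_pairs).mean() if inter_pairs.numel() else dist.new_zeros(())
    return intra + inter
```

In training, one would typically add this term to the usual classification objective, e.g. `F.cross_entropy(logits, labels) + lam * cc_loss(attention, labels)`, where the weight `lam` is a hyperparameter of this sketch.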
Related papers
- Minimizing Chebyshev Prototype Risk Magically Mitigates the Perils of Overfitting [1.6574413179773757]
We develop multicomponent loss functions that reduce intra-class feature correlation and maximize inter-class feature distance.
We implement the terms of the Chebyshev Prototype Risk (CPR) bound into our Explicit CPR loss function.
Our training algorithm reduces overfitting and improves upon previous approaches in many settings.
arXiv Detail & Related papers (2024-04-10T15:16:04Z) - Class-Agnostic Segmentation Loss and Its Application to Salient Object Detection and Segmentation [17.149364927872014]
We present a novel loss function, called class-agnostic segmentation (CAS) loss.
We show that the CAS loss function is sparse, bounded, and robust to class-imbalance.
arXiv Detail & Related papers (2021-07-16T12:26:31Z) - Channel DropBlock: An Improved Regularization Method for Fine-Grained Visual Classification [58.07257910065007]
Existing approaches mainly tackle this problem by introducing attention mechanisms to locate discriminative parts, or feature-encoding approaches to extract highly parameterized features in a weakly supervised fashion.
In this work, we propose a lightweight yet effective regularization method named Channel DropBlock (CDB) in combination with two alternative correlation metrics, to address this problem.
arXiv Detail & Related papers (2021-06-07T09:03:02Z) - Orthogonal Projection Loss [59.61277381836491]
We develop a novel loss function termed 'Orthogonal Projection Loss' (OPL).
OPL directly enforces inter-class separation alongside intra-class clustering in the feature space.
OPL offers unique advantages as it does not require careful negative mining and is not sensitive to the batch size (a minimal sketch of this idea appears after this related-papers list).
arXiv Detail & Related papers (2021-03-25T17:58:00Z) - Class-Agnostic Segmentation Loss and Its Application to Salient Object Detection and Segmentation [17.532822703595766]
We present a novel loss function, called class-agnostic segmentation (CAS) loss.
We show that the CAS loss function is sparse, bounded, and robust to class-imbalance.
We investigate the performance against the state-of-the-art methods in two settings of low and high-fidelity training data.
arXiv Detail & Related papers (2020-10-28T07:11:15Z) - $\sigma^2$R Loss: a Weighted Loss by Multiplicative Factors using Sigmoidal Functions [0.9569316316728905]
We introduce a new loss function called the squared reduction loss ($\sigma^2$R loss), which is regulated by a sigmoid function to inflate/deflate the error per instance.
Our loss has a clear intuition and geometric interpretation, and we demonstrate its effectiveness by experiments.
arXiv Detail & Related papers (2020-09-18T12:34:40Z) - Fine-Grained Visual Classification with Efficient End-to-end Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z) - The Devil is in the Channels: Mutual-Channel Loss for Fine-Grained Image Classification [67.79883226015824]
The key to solving fine-grained image categorization is finding discriminative and local regions that correspond to subtle visual traits.
In this paper, we show it is possible to cultivate subtle details without the need for overly complicated network designs or training mechanisms.
The proposed loss function, termed the mutual-channel loss (MC-Loss), consists of two channel-specific components.
arXiv Detail & Related papers (2020-02-11T09:12:45Z) - Learning Class Regularized Features for Action Recognition [68.90994813947405]
We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvement gains of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
arXiv Detail & Related papers (2020-02-07T07:27:49Z)
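As flagged in the Orthogonal Projection Loss entry above, that description (inter-class separation alongside intra-class clustering, no negative mining, insensitivity to batch size) is concrete enough for a small illustration. The Python sketch below assumes an OPL-style formulation over a mini-batch, where same-class features are driven toward cosine similarity 1 and different-class features toward orthogonality; the weighting `gamma` and this exact form are assumptions of the sketch, not a reproduction of the paper.

```python
# Hypothetical sketch of an orthogonal-projection-style loss; not the OPL authors' code.
import torch
import torch.nn.functional as F

def orthogonal_projection_loss(features: torch.Tensor, labels: torch.Tensor, gamma: float = 0.5) -> torch.Tensor:
    """Align same-class features and push different-class features toward orthogonality,
    using only the current mini-batch (no negative mining required).

    features: (B, D) embeddings, labels: (B,) class indices,
    gamma: inter-class weighting (an assumption of this sketch).
    """
    feats = F.normalize(features, dim=1)                 # unit-norm features
    cos = feats @ feats.t()                              # (B, B) cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)    # (B, B) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = cos[same & ~eye]                               # same-class pairs, excluding self
    neg = cos[~same]                                     # different-class pairs
    intra = (1.0 - pos.mean()) if pos.numel() else cos.new_zeros(())
    inter = neg.abs().mean() if neg.numel() else cos.new_zeros(())
    return intra + gamma * inter
```

As with the CC-Loss sketch above, this term would normally be added to a standard cross-entropy objective with a small weighting factor.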
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.