AMC-Loss: Angular Margin Contrastive Loss for Improved Explainability in
Image Classification
- URL: http://arxiv.org/abs/2004.09805v1
- Date: Tue, 21 Apr 2020 08:03:14 GMT
- Title: AMC-Loss: Angular Margin Contrastive Loss for Improved Explainability in
Image Classification
- Authors: Hongjun Choi, Anirudh Som and Pavan Turaga
- Abstract summary: Angular Margin Contrastive Loss (AMC-Loss) is a new loss function to be used along with the traditional cross-entropy loss.
AMC-Loss employs a discriminative angular distance metric that is equivalent to the geodesic distance on a hypersphere manifold.
We find that although the proposed geometrically constrained loss function improves quantitative results only modestly, it has a surprisingly beneficial qualitative effect, increasing the interpretability of deep-net decisions.
- Score: 8.756814963313804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep-learning architectures for classification problems involve the
cross-entropy loss sometimes assisted with auxiliary loss functions like center
loss, contrastive loss and triplet loss. These auxiliary loss functions
facilitate better discrimination between the different classes of interest.
However, recent studies hint that these loss functions do not account for the
intrinsic angular distribution exhibited by the low-level and high-level
feature representations. This results in less compactness between
samples from the same class and unclear boundary separations between data
clusters of different classes. In this paper, we address this issue by
proposing the use of geometric constraints, rooted in Riemannian geometry.
Specifically, we propose Angular Margin Contrastive Loss (AMC-Loss), a new loss
function to be used along with the traditional cross-entropy loss. The AMC-Loss
employs a discriminative angular distance metric that is equivalent to the
geodesic distance on a hypersphere manifold and therefore admits a clear
geometric interpretation. We demonstrate the effectiveness of AMC-Loss by
providing quantitative and qualitative results. We find that although the
proposed geometrically constrained loss function improves quantitative results
only modestly, it has a surprisingly beneficial qualitative effect, increasing
the interpretability of deep-net decisions as seen in the visual explanations
generated by techniques such as Grad-CAM. Our code is available at
https://github.com/hchoi71/AMC-Loss.
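To make the geometric idea concrete, below is a minimal, illustrative sketch of an angular-margin contrastive term in PyTorch. It assumes the standard contrastive form applied to geodesic distances between L2-normalized embeddings; the function and parameter names here are hypothetical, and the authors' repository linked above remains the reference implementation.

# Illustrative sketch only -- not the official AMC-Loss code.
# Embeddings are L2-normalized onto the unit hypersphere, so the geodesic
# distance between two features is the arccosine of their inner product.
import torch
import torch.nn.functional as F

def amc_style_loss(features, labels, margin=0.5):
    # features: (B, D) embeddings; labels: (B,) integer class labels;
    # margin: angular margin (in radians) dissimilar pairs must exceed.
    z = F.normalize(features, dim=1)              # project onto hypersphere
    cos = (z @ z.t()).clamp(-1 + 1e-7, 1 - 1e-7)  # pairwise cosine similarity
    geo = torch.acos(cos)                         # geodesic (angular) distance
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    # Pull same-class pairs together; push different-class pairs past the margin.
    pos = same * geo.pow(2)
    neg = (1.0 - same) * F.relu(margin - geo).pow(2)
    mask = 1.0 - torch.eye(z.size(0), device=z.device)  # drop self-pairs
    return ((pos + neg) * mask).sum() / mask.sum()

In training, such a term would typically be added to the usual cross-entropy objective with a small weight, e.g. loss = F.cross_entropy(logits, labels) + lam * amc_style_loss(features, labels).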
Related papers
- EnsLoss: Stochastic Calibrated Loss Ensembles for Preventing Overfitting in Classification [1.3778851745408134]
We propose a novel ensemble method, namely EnsLoss, to combine loss functions within the empirical risk minimization framework.
We first transform the CC conditions of losses into loss-derivatives, thereby bypassing the need for explicit loss functions.
We theoretically establish the statistical consistency of our approach and provide insights into its benefits.
arXiv Detail & Related papers (2024-09-02T02:40:42Z) - Large Margin Discriminative Loss for Classification [3.3975558777609915]
We introduce a novel discriminative loss function with a large margin in the context of deep learning.
This loss boosts the discriminative power of neural nets in terms of intra-class compactness and inter-class separability.
arXiv Detail & Related papers (2024-05-28T18:10:45Z) - Disentanglement Learning via Topology [22.33086299021419]
We propose TopDis, a method for learning disentangled representations by adding a multi-scale topological loss term.
Disentanglement is a crucial property of data representations, essential for the explainability and robustness of deep learning models.
We show how to use the proposed topological loss to find disentangled directions in a trained GAN.
arXiv Detail & Related papers (2023-08-24T10:29:25Z) - Do Lessons from Metric Learning Generalize to Image-Caption Retrieval? [67.45267657995748]
The triplet loss with semi-hard negatives has become the de facto choice for image-caption retrieval (ICR) methods that are optimized from scratch.
Recent progress in metric learning has given rise to new loss functions that outperform the triplet loss on tasks such as image retrieval and representation learning.
We ask whether these findings generalize to the setting of ICR by comparing three loss functions on two ICR methods.
arXiv Detail & Related papers (2022-02-14T15:18:00Z) - The KFIoU Loss for Rotated Object Detection [115.334070064346]
In this paper, we argue that one effective alternative is to devise an approximate loss that can achieve trend-level alignment with the SkewIoU loss.
Specifically, we model the objects as Gaussian distributions and adopt a Kalman filter to inherently mimic the mechanism of SkewIoU.
The resulting new loss, called KFIoU, is easier to implement and works better than the exact SkewIoU loss.
arXiv Detail & Related papers (2022-01-29T10:54:57Z) - Understanding Square Loss in Training Overparametrized Neural Network
Classifiers [31.319145959402462]
We contribute to the theoretical understanding of square loss in classification by systematically investigating how it performs for overparametrized neural networks.
We consider two cases, according to whether the classes are separable or not. In the general non-separable case, a fast convergence rate is established for both the misclassification rate and the calibration error.
The resulting margin is proven to be bounded away from zero, providing theoretical guarantees for robustness.
arXiv Detail & Related papers (2021-12-07T12:12:30Z) - Orthogonal Projection Loss [59.61277381836491]
We develop a novel loss function termed 'Orthogonal Projection Loss' (OPL).
OPL directly enforces inter-class separation alongside intra-class clustering in the feature space (see the hedged sketch after this list).
OPL offers unique advantages as it does not require careful negative mining and is not sensitive to the batch size.
arXiv Detail & Related papers (2021-03-25T17:58:00Z) - Shaping Deep Feature Space towards Gaussian Mixture for Visual
Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z) - Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation [56.343646789922545]
We propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric.
Experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently.
arXiv Detail & Related papers (2020-10-15T17:59:08Z) - $\sigma^2$R Loss: a Weighted Loss by Multiplicative Factors using
Sigmoidal Functions [0.9569316316728905]
We introduce a new loss function called squared reduction loss ($\sigma^2$R loss), which is regulated by a sigmoid function to inflate/deflate the error per instance.
Our loss has a clear intuition and geometric interpretation; we demonstrate the effectiveness of our proposal through experiments.
arXiv Detail & Related papers (2020-09-18T12:34:40Z) - Enhancing Geometric Factors in Model Learning and Inference for Object
Detection and Instance Segmentation [91.12575065731883]
We propose Complete-IoU (CIoU) loss and Cluster-NMS for enhancing geometric factors in both bounding box regression and Non-Maximum Suppression (NMS).
The training of deep models using the CIoU loss results in consistent AP and AR improvements in comparison to the widely adopted $\ell_n$-norm loss and IoU-based losses.
Cluster-NMS is very efficient due to its pure GPU implementation, and geometric factors can be incorporated to improve both AP and AR.
arXiv Detail & Related papers (2020-05-07T16:00:27Z)