RISAN: Robust Instance Specific Abstention Network
- URL: http://arxiv.org/abs/2107.03090v1
- Date: Wed, 7 Jul 2021 09:14:54 GMT
- Title: RISAN: Robust Instance Specific Abstention Network
- Authors: Bhavya Kalra, Kulin Shah and Naresh Manwani
- Abstract summary: We propose deep architectures for learning instance-specific abstain (reject option) binary classifiers.
The proposed approach uses the double sigmoid loss function described by Kulin Shah and Naresh Manwani.
We observe that the proposed approach not only performs comparably to the state-of-the-art approaches but is also robust against label noise.
- Score: 13.303728978965072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose deep architectures for learning instance specific
abstain (reject option) binary classifiers. The proposed approach uses the
double sigmoid loss function, described by Kulin Shah and Naresh Manwani in
("Online Active Learning of Reject Option Classifiers", AAAI, 2020), as a performance
measure. We show that the double sigmoid loss is classification calibrated. We
also show that the excess risk of the 0-d-1 loss is upper bounded by the excess
risk of the double sigmoid loss. We derive generalization error bounds for the
proposed architecture for reject option classifiers. To show the effectiveness
of the proposed approach, we experiment with several real world datasets. We
observe that the proposed approach not only performs comparably to the
state-of-the-art approaches but is also robust against label noise. We also
provide visualizations to observe the important features learned by the network
corresponding to the abstaining decision.
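The abstract does not spell out the loss itself, so the sketch below is only an illustration of how a double-sigmoid surrogate for the 0-d-1 reject-option loss might look: the rejection cost `d`, the (d, 1-d) weighting, and the steepness parameter `gamma` are assumptions made here, not the paper's exact formulation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def double_sigmoid_loss(margin, rho, d=0.25, gamma=1.0):
    """Illustrative smooth surrogate for the 0-d-1 reject-option loss.

    margin: y * f(x), the signed prediction margin
    rho:    non-negative rejection bandwidth (abstain when |f(x)| <= rho)
    d:      cost of abstaining, assumed 0 < d < 0.5
    gamma:  sigmoid steepness (hypothetical scaling parameter)
    """
    # One sigmoid penalizes margins below +rho (risk of falling into the
    # rejection band); the other penalizes margins below -rho (outright
    # misclassification). Both terms vanish as the margin grows.
    return (2.0 * d * sigmoid(gamma * (rho - margin))
            + 2.0 * (1.0 - d) * sigmoid(gamma * (-rho - margin)))

def predict_with_abstention(score, rho):
    """0-d-1 decision rule: abstain inside the band [-rho, +rho]."""
    if abs(score) <= rho:
        return 0  # abstain
    return 1 if score > 0 else -1
```

In an instance-specific setting such as RISAN's, the network would output both the score f(x) and a per-instance bandwidth rho(x), so the abstention region adapts to each input rather than being a single global threshold.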
Related papers
- Graph Anomaly Detection with Noisy Labels by Reinforcement Learning [13.135788402192215]
We propose a novel framework REGAD, i.e., REinforced Graph Anomaly Detector.
Specifically, we aim to maximize the performance improvement (AUC) of a base detector by cutting noisy edges approximated through the nodes with high-confidence labels.
arXiv Detail & Related papers (2024-07-08T13:41:21Z) - Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Enhancing Robust Representation in Adversarial Training: Alignment and
Exclusion Criteria [61.048842737581865]
We show that Adversarial Training (AT) fails to learn robust features, resulting in poor adversarial robustness.
We propose a generic AT framework that gains robust representations via asymmetric negative contrast and reverse attention.
Empirical evaluations on three benchmark datasets show our methods greatly advance the robustness of AT and achieve state-of-the-art performance.
arXiv Detail & Related papers (2023-10-05T07:29:29Z) - Deep Metric Learning with Soft Orthogonal Proxies [1.823505080809275]
We propose a novel approach that introduces Soft Orthogonality (SO) constraint on proxies.
Our approach leverages Data-Efficient Image Transformer (DeiT) as an encoder to extract contextual features from images along with a DML objective.
Our evaluations demonstrate the superiority of our proposed approach over state-of-the-art methods by a significant margin.
arXiv Detail & Related papers (2023-06-22T17:22:15Z) - Characterizing the Optimal 0-1 Loss for Multi-class Classification with
a Test-time Attacker [57.49330031751386]
We find achievable information-theoretic lower bounds on loss in the presence of a test-time attacker for multi-class classifiers on any discrete dataset.
We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints.
arXiv Detail & Related papers (2023-02-21T15:17:13Z) - Learning Classifiers of Prototypes and Reciprocal Points for Universal
Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets while handling two shifts: domain-shift and category-shift.
The main challenge is correctly distinguishing unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training on the target-adapted known classes and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z) - Taming Adversarial Robustness via Abstaining [7.1975923901054575]
We consider a binary classification problem where the observations can be perturbed by an adversary.
We include an abstaining option, where the classifier abstains from taking a decision when it has low confidence about the prediction.
We show that there exists a tradeoff between the two metrics regardless of the method used to choose the abstaining region.
arXiv Detail & Related papers (2021-04-06T07:36:48Z) - Shaping Deep Feature Space towards Gaussian Mixture for Visual
Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
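A minimal sketch of a Gaussian mixture (GM) style loss is given below; it is an illustrative reconstruction from the one-line summary above, not the cited paper's exact formulation. Identity covariances, the margin form, and the likelihood weight `lam` are all assumptions made for this example, and in practice the class means would typically be learned jointly with the network.

```python
import numpy as np

def gm_loss(features, labels, means, margin=0.1, lam=0.1):
    """Illustrative Gaussian-mixture loss (identity covariances assumed).

    features: (N, D) deep features
    labels:   (N,) integer class labels
    means:    (K, D) per-class Gaussian means (assumed given here)
    """
    n = features.shape[0]
    # Squared Euclidean distance of each feature to each class mean: (N, K).
    d2 = ((features[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    # Classification term: softmax over negative distances, with a
    # multiplicative margin inflating the true-class distance.
    d2_m = d2.copy()
    d2_m[np.arange(n), labels] *= (1.0 + margin)
    logits = -0.5 * d2_m
    logits -= logits.max(-1, keepdims=True)  # numerical stability
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    cls = -logp[np.arange(n), labels].mean()
    # Likelihood regularizer: pulls each feature toward its class mean,
    # shaping the feature space toward the Gaussian mixture.
    lkd = 0.5 * d2[np.arange(n), labels].mean()
    return cls + lam * lkd
```

Features lying near their own class mean incur a small loss, while features near a wrong class mean are penalized by both the classification and likelihood terms.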
arXiv Detail & Related papers (2020-11-18T03:32:27Z) - Achieving robustness in classification using optimal transport with
hinge regularization [7.780418853571034]
We propose a new framework for binary classification, based on optimal transport.
We learn 1-Lipschitz networks using a new loss that is a hinge-regularized version of the Kantorovich-Rubinstein dual formulation of the Wasserstein distance.
arXiv Detail & Related papers (2020-06-11T15:36:23Z) - Adaptive Double-Exploration Tradeoff for Outlier Detection [31.428683644520046]
We study a variant of the thresholding bandit problem (TBP) in the context of outlier detection.
The objective is to identify the outliers whose rewards are above a threshold.
By automatically trading off exploring the individual arms and exploring the outlier threshold, we provide an efficient algorithm.
arXiv Detail & Related papers (2020-05-13T00:12:31Z) - Proposal Learning for Semi-Supervised Object Detection [76.83284279733722]
It is non-trivial to train object detectors on unlabeled data due to the unavailability of ground truth labels.
We present a proposal learning approach to learn proposal features and predictions from both labeled and unlabeled data.
arXiv Detail & Related papers (2020-01-15T00:06:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.