Scalable Optimal Classifiers for Adversarial Settings under Uncertainty
- URL: http://arxiv.org/abs/2106.14702v1
- Date: Mon, 28 Jun 2021 13:33:53 GMT
- Title: Scalable Optimal Classifiers for Adversarial Settings under Uncertainty
- Authors: Patrick Loiseau and Benjamin Roussillon
- Abstract summary: We consider the problem of finding optimal classifiers in an adversarial setting where the class-1 data is generated by an attacker whose objective is not known to the defender.
We show that this low-dimensional characterization enables us to develop a training method that computes provably approximately optimal classifiers in a scalable manner.
- Score: 10.90668635921398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of finding optimal classifiers in an adversarial
setting where the class-1 data is generated by an attacker whose objective is
not known to the defender -- an aspect that is key to realistic applications
but has so far been overlooked in the literature. To model this situation, we
propose a Bayesian game framework where the defender chooses a classifier with
no a priori restriction on the set of possible classifiers. The key difficulty
in the proposed framework is that the set of possible classifiers is
exponential in the set of possible data, which is itself exponential in the
number of features used for classification. To counter this, we first show that
Bayesian Nash equilibria can be characterized completely via functional
threshold classifiers with a small number of parameters. We then show that this
low-dimensional characterization enables us to develop a training method that
computes provably approximately optimal classifiers in a scalable manner, and
a learning algorithm for the online setting with low regret (both
independent of the dimension of the set of possible data). We illustrate our
results through simulations.
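To make the scalability obstacle concrete: with n binary features there are 2^n possible data points and hence 2^(2^n) deterministic classifiers, so searching over classifiers directly is hopeless even for small n. The sketch below illustrates, in miniature, why a threshold structure collapses this search space; it is not the paper's construction (which must handle an attacker whose objective is unknown). It assumes a small discrete feature space, a known benign distribution p0, and an invented attacker distribution p1, and it classifies a point as class 1 when a likelihood ratio exceeds a single scalar threshold, tuned by a one-dimensional sweep.

```python
import numpy as np

# Minimal sketch (not the paper's algorithm): over a 10-point discrete
# feature space, classify a point as class 1 when the likelihood ratio
# p1/p0 exceeds a single scalar threshold, then tune that one parameter.
# Both distributions below are invented for illustration.
p0 = np.array([0.20, 0.18, 0.15, 0.12, 0.10,
               0.08, 0.07, 0.05, 0.03, 0.02])   # benign data (class 0)
p1 = p0[::-1].copy()                            # hypothetical attacker (class 1)

def flagged(tau):
    """Boolean mask of the data points classified as class 1 by threshold tau."""
    return p1 / p0 > tau

def risk(tau, prior1=0.5):
    """Expected 0-1 loss: false alarms on class 0 plus misses on class 1."""
    mask = flagged(tau)
    return (1 - prior1) * p0[mask].sum() + prior1 * p1[~mask].sum()

best = min(np.linspace(0.0, 10.0, 501), key=risk)
print(f"best threshold: {best:.2f}, risk: {risk(best):.3f}")
```

The point is only that a classifier over an exponentially large domain can be represented and optimized through a handful of parameters; in the paper, the equilibrium classifiers instead threshold a functional of the data, and the attacker's distribution is not known in advance.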
Related papers
- Generating collective counterfactual explanations in score-based
classification via mathematical optimization [4.281723404774889]
A counterfactual explanation of an instance indicates how this instance should be minimally modified so that the perturbed instance is classified in the desired class.
Most of the Counterfactual Analysis literature focuses on the single-instance single-counterfactual setting.
By means of novel Mathematical Optimization models, we provide a counterfactual explanation for each instance in a group of interest.
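As a toy illustration of the single-instance case described above (not the Mathematical Optimization models this paper proposes): for a linear score-based classifier sign(w·x + b), the minimal L2 perturbation that moves an instance across the decision boundary has a closed form. The weights and instance below are invented.

```python
import numpy as np

# Toy sketch: minimal L2 counterfactual for a linear score-based classifier
# sign(w @ x + b). Not this paper's method (which covers groups of
# instances); all numbers here are invented for illustration.
w = np.array([1.5, -2.0, 0.5])
b = -0.25
x = np.array([0.2, 0.4, 0.1])            # instance currently in class 0

score = w @ x + b                        # negative, i.e. class 0
delta = -(score / (w @ w)) * w           # projection onto the boundary
x_cf = x + 1.01 * delta                  # overshoot slightly into class 1
print(f"original score {score:+.3f}, counterfactual score {w @ x_cf + b:+.3f}")
```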
arXiv Detail & Related papers (2023-10-19T15:18:42Z)
- Mitigating Word Bias in Zero-shot Prompt-based Classifiers [55.60306377044225]
We show that matching class priors correlates strongly with the oracle upper bound performance.
We also demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.
arXiv Detail & Related papers (2023-09-10T10:57:41Z)
- Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker [57.49330031751386]
We find achievable information-theoretic lower bounds on loss in the presence of a test-time attacker for multi-class classifiers on any discrete dataset.
We provide a general framework for finding the optimal 0-1 loss that revolves around the construction of a conflict hypergraph from the data and adversarial constraints.
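A sketch of the pairwise special case may help fix ideas (the paper works with a hypergraph, and the data and budget below are invented): two labeled points conflict if an attacker with budget eps can move both to a common input, in which case any classifier errs on at least one of them, so a maximum matching of disjoint conflicts lower-bounds the optimal 0-1 loss.

```python
import itertools
import networkx as nx

# Pairwise conflict-graph sketch (the paper's construction is a hypergraph;
# data and budget here are invented). Points with different labels conflict
# when their L-inf perturbation balls of radius eps overlap.
eps = 1
data = [((0,), 0), ((1,), 1), ((5,), 0), ((6,), 1), ((9,), 0)]  # (x, label)

def conflict(a, b):
    (xa, ya), (xb, yb) = a, b
    return ya != yb and all(abs(u - v) <= 2 * eps for u, v in zip(xa, xb))

G = nx.Graph()
G.add_nodes_from(range(len(data)))
G.add_edges_from((i, j)
                 for i, j in itertools.combinations(range(len(data)), 2)
                 if conflict(data[i], data[j]))

# Disjoint conflicting pairs each force at least one error.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(f"at least {len(matching)} unavoidable errors on {len(data)} points")
```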
arXiv Detail & Related papers (2023-02-21T15:17:13Z)
- Determination of class-specific variables in nonparametric multiple-class classification [0.0]
We propose a probability-based nonparametric multiple-class classification method and integrate it with the ability to identify high-impact variables for individual classes.
We report the properties of the proposed method and use both synthetic and real data sets to illustrate its behavior under different classification situations.
arXiv Detail & Related papers (2022-05-07T10:08:58Z)
- Exploring Category-correlated Feature for Few-shot Image Classification [27.13708881431794]
We present a simple yet effective feature rectification method by exploring the category correlation between novel and base classes as the prior knowledge.
The proposed approach consistently obtains considerable performance gains on three widely used benchmarks.
arXiv Detail & Related papers (2021-12-14T08:25:24Z)
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
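The general mechanism can be sketched as follows (a generic nearest-prototype rule on invented data, not necessarily this paper's exact formulation): each class is summarized by the mean of its embeddings, and a query is assigned to the nearest prototype, adding no trainable parameters beyond the embedding network.

```python
import numpy as np

# Generic nearest-prototype classification sketch; the embeddings below are
# invented stand-ins for the output of a trained embedding network.
rng = np.random.default_rng(0)
emb = {0: rng.normal(0.0, 1.0, (500, 16)),    # head class: many samples
       1: rng.normal(2.0, 1.0, (20, 16))}     # tail class: few samples
prototypes = {c: e.mean(axis=0) for c, e in emb.items()}

def predict(x):
    """Assign x to the class whose prototype (mean embedding) is nearest."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))

print(predict(rng.normal(2.0, 1.0, 16)))      # likely 1 despite the imbalance
```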
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
- Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification [94.55805516167369]
We propose a new approach for binary classification from $m$ U-sets for $m \ge 2$.
Our key idea is to consider an auxiliary classification task called surrogate set classification (SSC).
arXiv Detail & Related papers (2021-02-01T07:36:38Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
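A generic version of such a two-stage pipeline might look like the sketch below; the encoder is a random stand-in for a learned self-supervised representation, and the second stage uses an off-the-shelf one-class classifier rather than the paper's exact models.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Two-stage one-class sketch: (1) map inputs to representations with an
# encoder (a random stand-in here for a self-supervised one), then
# (2) fit a one-class classifier on those representations.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, (200, 32))      # one-class ("normal") data

W = rng.normal(size=(32, 8))

def encode(X):
    """Stage 1: stand-in encoder mapping inputs to 8-d representations."""
    return np.tanh(X @ W)

clf = OneClassSVM(nu=0.1).fit(encode(X_train)) # stage 2: one-class classifier
X_test = rng.normal(3.0, 1.0, (5, 32))         # shifted, likely anomalous
print(clf.predict(encode(X_test)))             # -1 = outlier, +1 = inlier
```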
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Inverse Classification with Limited Budget and Maximum Number of Perturbed Samples [18.76745359031975]
Inverse classification is a post-modeling process that finds changes in the input features of a sample that alter its initially predicted class.
In this study, we propose a new framework to solve inverse classification that maximizes the number of perturbed samples.
We design algorithms to solve this problem based on gradient methods, processes, Lagrangian relaxations, and the Gumbel trick.
arXiv Detail & Related papers (2020-09-29T15:52:10Z)
- The Role of Randomness and Noise in Strategic Classification [7.972516140165492]
We investigate the problem of designing optimal classifiers in the strategic classification setting.
We show that in many natural cases, the optimal solution has a structure in which players never change their feature vectors.
We also show that a noisier signal leads to better equilibria outcomes.
arXiv Detail & Related papers (2020-05-17T21:49:41Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)