Bayesian Strategic Classification
- URL: http://arxiv.org/abs/2402.08758v1
- Date: Tue, 13 Feb 2024 19:51:49 GMT
- Title: Bayesian Strategic Classification
- Authors: Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba
Ziani
- Abstract summary: We initiate the study of partial information release by the learner in strategic classification.
We show how such partial information release can, counter-intuitively, benefit the learner's accuracy, despite increasing agents' abilities to manipulate.
- Score: 11.439576371711711
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In strategic classification, agents modify their features, at a cost, to
ideally obtain a positive classification from the learner's classifier. The
typical response of the learner is to carefully modify their classifier to be
robust to such strategic behavior. When reasoning about agent manipulations,
most papers that study strategic classification rely on the following strong
assumption: agents fully know the exact parameters of the deployed classifier
by the learner. This often is an unrealistic assumption when using complex or
proprietary machine learning techniques in real-world prediction tasks.
We initiate the study of partial information release by the learner in
strategic classification. We move away from the traditional assumption that
agents have full knowledge of the classifier. Instead, we consider agents that
have a common distributional prior on which classifier the learner is using.
The learner in our model can reveal truthful, yet not necessarily complete,
information about the deployed classifier to the agents. The learner's goal is
to release just enough information about the classifier to maximize accuracy.
We show how such partial information release can, counter-intuitively, benefit
the learner's accuracy, despite increasing agents' abilities to manipulate.
We show that while it is intractable to compute the best response of an agent
in the general case, there exist oracle-efficient algorithms that can solve the
best response of the agents when the learner's hypothesis class is the class of
linear classifiers, or when the agents' cost function satisfies a natural
notion of submodularity as we define. We then turn our attention to the
learner's optimization problem and provide both positive and negative results
on the algorithmic problem of how much information the learner should release
about the classifier to maximize their expected accuracy.
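The agent's problem described above can be illustrated with a toy sketch (hypothetical names and parameters, not the paper's algorithm): an agent holds a prior over which threshold classifier is deployed and picks the feature value maximizing expected positive-classification probability minus a manipulation cost. The paper shows the general best-response problem is intractable, so the exhaustive search here is only a tiny illustration.

```python
def prob_positive(x, prior):
    """Probability of a positive label when the 1-D classifier
    sign(x - b) is drawn from the agent's prior over thresholds b."""
    return sum(p for p, b in prior if x >= b)

def best_response(x0, prior, candidates, cost_scale=1.0):
    """Brute-force best response: the candidate feature value that
    maximizes expected positive-classification probability minus a
    linear manipulation cost |x - x0|."""
    return max(candidates,
               key=lambda x: prob_positive(x, prior) - cost_scale * abs(x - x0))

# Agent's common prior: 50/50 over thresholds 0.5 and 1.5.
prior = [(0.5, 0.5), (0.5, 1.5)]
x_star = best_response(0.0, prior, [0.0, 0.5, 1.0, 1.5, 2.0], cost_scale=0.4)
# With this cost, clearing both possible thresholds (x = 1.5) is optimal.
```

Releasing partial information about the deployed classifier reshapes this prior, which is exactly the lever the learner optimizes over in the paper.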
Related papers
- Learnability Gaps of Strategic Classification [68.726857356532]
We focus on addressing a fundamental question: the learnability gaps between strategic classification and standard learning.
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach for anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
Uncertainty measures derived from evidence theory drive the anomaly detection step.
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- PERFEX: Classifier Performance Explanations for Trustworthy AI Systems [0.9668407688201357]
Explanations make predictions actionable to the user.
Existing explanation methods, however, typically only provide explanations for individual predictions.
This paper presents a method, called PERFormance EXplainer (PERFEX), to explain the qualities of a trained base classifier.
arXiv Detail & Related papers (2022-12-12T17:03:09Z)
- Learning to Generate Novel Classes for Deep Metric Learning [24.048915378172012]
We introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors.
We implement this idea by learning and exploiting a conditional generative model, which, given a class label and a noise, produces a random embedding vector of the class.
Our proposed generator allows the loss to use richer class relations by augmenting realistic and diverse classes, resulting in better generalization to unseen samples.
arXiv Detail & Related papers (2022-01-04T06:55:19Z)
- Early Exiting with Ensemble Internal Classifiers [57.80488632985445]
Early exiting has gained much attention in the NLP community.
We propose a voting-based strategy that considers predictions of all the past internal classifiers to infer the correct label.
Experimental results on various NLP tasks show that our proposed objective function and voting-based strategy can achieve better accuracy-speed trade-offs.
arXiv Detail & Related papers (2021-05-28T12:54:11Z)
- Class-Incremental Learning with Generative Classifiers [6.570917734205559]
We propose a new strategy for class-incremental learning: generative classification.
Our proposal is to learn the joint distribution p(x,y), factorized as p(x|y)p(y), and to perform classification using Bayes' rule.
As a proof-of-principle, here we implement this strategy by training a variational autoencoder for each class to be learned.
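A minimal sketch of the generative-classification idea above (hypothetical, using simple 1-D Gaussian class-conditionals rather than the paper's per-class variational autoencoders): learn p(x|y) per class and a prior p(y), then classify by Bayes' rule, argmax over y of p(x|y)p(y).

```python
import math
import statistics

def fit_class(samples):
    """Fit a 1-D Gaussian density p(x|y) to one class's samples."""
    return statistics.fmean(samples), statistics.stdev(samples)

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x, models, priors):
    """Bayes' rule: predict argmax over y of p(x|y) * p(y)."""
    return max(models, key=lambda y: gaussian_pdf(x, *models[y]) * priors[y])

# Two classes learned independently, as in class-incremental training.
models = {0: fit_class([0.0, 0.2, -0.1, 0.1]),
          1: fit_class([2.0, 2.2, 1.9, 2.1])}
priors = {0: 0.5, 1: 0.5}
pred = classify(1.8, models, priors)  # lies near class 1's density
```

Because each p(x|y) is learned separately, adding a new class never requires retraining the old ones, which is what makes the approach suited to class-incremental learning.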
arXiv Detail & Related papers (2021-04-20T16:26:14Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.