Strategic Classification in the Dark
- URL: http://arxiv.org/abs/2102.11592v1
- Date: Tue, 23 Feb 2021 10:13:54 GMT
- Title: Strategic Classification in the Dark
- Authors: Ganesh Ghalme, Vineet Nair, Itay Eilat, Inbal Talgam-Cohen, and Nir
Rosenfeld
- Abstract summary: This paper studies the interaction between a classification rule and the strategic agents it governs.
We define the price of opacity as the difference in prediction error between opaque and transparent strategy-robust classifiers.
Our experiments show how Hardt et al.'s robust classifier is affected by keeping agents in the dark.
- Score: 9.281044712121423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Strategic classification studies the interaction between a classification
rule and the strategic agents it governs. Under the assumption that the
classifier is known, rational agents respond to it by manipulating their
features. However, in many real-life scenarios of high-stakes classification
(e.g., credit scoring), the classifier is not revealed to the agents, which
leads agents to attempt to learn the classifier and game it too. In this paper
we generalize the strategic classification model to such scenarios. We define
the price of opacity as the difference in prediction error between opaque and
transparent strategy-robust classifiers, characterize it, and give a sufficient
condition for this price to be strictly positive, in which case transparency is
the recommended policy. Our experiments show how Hardt et al.'s robust
classifier is affected by keeping agents in the dark.
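The price-of-opacity notion can be illustrated with a toy simulation. The sketch below is an illustrative assumption, not the paper's construction: it uses a 1-D threshold classifier, Hardt et al.-style best responses with a linear manipulation cost, and a Gaussian noise model for agents' private estimates of an unpublished threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_response(x, t, cost=2.0, gain=1.0):
    """Agents below a believed threshold t move up to t iff the linear
    manipulation cost is at most the gain from positive classification."""
    return np.where((x < t) & (cost * (t - x) <= gain), t, x)

# Population: scalar feature x, ground-truth label y = 1 iff x >= 0.5.
x = rng.uniform(0.0, 1.0, 10_000)
y = (x >= 0.5).astype(int)

# A robust threshold shifted by the gameable margin gain/cost, so that
# exactly the true positives can profitably reach it.
t_robust = 0.5 + 1.0 / 2.0

# Transparent: everyone best-responds to the published robust threshold.
x_tr = best_response(x, t_robust)
err_transparent = np.mean((x_tr >= t_robust).astype(int) != y)

# Opaque: each agent responds to a noisy private estimate of the threshold.
t_hat = t_robust + rng.normal(0.0, 0.2, x.size)
x_op = best_response(x, t_hat)
err_opaque = np.mean((x_op >= t_robust).astype(int) != y)

price_of_opacity = err_opaque - err_transparent
print(f"transparent error: {err_transparent:.3f}")
print(f"opaque error:      {err_opaque:.3f}")
print(f"price of opacity:  {price_of_opacity:.3f}")
```

In this toy setting the transparent robust threshold classifies everyone correctly, while agents gaming noisy private estimates of a hidden threshold produce errors, so the price of opacity comes out strictly positive, echoing the paper's finding that transparency can be the recommended policy.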
Related papers
- Strategic Classification With Externalities [11.36782598786846]
We propose a new variant of the strategic classification problem.
Motivated by real-world applications, our model crucially allows the manipulation of one agent to affect another.
We show that under certain assumptions, the pure Nash Equilibrium of this agent manipulation game is unique and can be efficiently computed.
arXiv Detail & Related papers (2024-10-10T15:28:04Z)
- SelEx: Self-Expertise in Fine-Grained Generalized Category Discovery [55.72840638180451]
Generalized Category Discovery aims to simultaneously uncover novel categories and accurately classify known ones.
Traditional methods, which lean heavily on self-supervision and contrastive learning, often fall short when distinguishing between fine-grained categories.
We introduce a novel concept called 'self-expertise', which enhances the model's ability to recognize subtle differences and uncover unknown categories.
arXiv Detail & Related papers (2024-08-26T15:53:50Z)
- Bayesian Strategic Classification [11.439576371711711]
We study partial information release by the learner in strategic classification.
We show how such partial information release can, counter-intuitively, benefit the learner's accuracy, despite increasing agents' abilities to manipulate.
arXiv Detail & Related papers (2024-02-13T19:51:49Z)
- A Universal Unbiased Method for Classification from Aggregate Observations [115.20235020903992]
This paper presents a novel universal method for classification from aggregate observations (CFAO), which yields an unbiased estimator of the classification risk for arbitrary losses.
Our proposed method not only guarantees the risk consistency due to the unbiased risk estimator but also can be compatible with arbitrary losses.
arXiv Detail & Related papers (2023-06-20T07:22:01Z)
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach to anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
Uncertainty estimates are then used to detect anomalies.
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- Towards Fair Classification against Poisoning Attacks [52.57443558122475]
We study the poisoning scenario where the attacker can insert a small fraction of samples into training data.
We propose a general and theoretically guaranteed framework which accommodates traditional defense methods to fair classification against poisoning attacks.
arXiv Detail & Related papers (2022-10-18T00:49:58Z)
- Addressing Strategic Manipulation Disparities in Fair Classification [15.032416453073086]
We show that individuals from minority groups often pay a higher cost to update their features.
We propose a constrained optimization framework that constructs classifiers that lower the strategic manipulation cost for minority groups.
Empirically, we show the efficacy of this approach over multiple real-world datasets.
arXiv Detail & Related papers (2022-05-22T14:59:40Z)
- Trading via Selective Classification [3.5027291542274357]
We investigate the application of binary and ternary selective classification to trading strategy design.
For ternary classification, in addition to classes for the price going up or down, we include a third class that corresponds to relatively small price moves in either direction.
arXiv Detail & Related papers (2021-10-28T06:38:05Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- The Role of Randomness and Noise in Strategic Classification [7.972516140165492]
We investigate the problem of designing optimal classifiers in the strategic classification setting.
We show that in many natural cases, the optimal solution has a structure in which players never change their feature vectors.
We also show that a noisier signal leads to better equilibria outcomes.
arXiv Detail & Related papers (2020-05-17T21:49:41Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
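The randomized-smoothing idea behind such certificates can be sketched in a toy form. The snippet below is a hedged illustration, not the paper's specific construction: it retrains a toy 1-D threshold learner on randomly label-flipped copies of the training set and predicts by majority vote, so that a small number of adversarial label flips rarely changes a high-margin vote.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_threshold(x, y):
    """Toy 1-D 'learner': pick the data-point threshold with minimal
    training error for the rule predict 1 iff x >= t."""
    cands = np.sort(x)
    errs = [np.mean((x >= t) != y) for t in cands]
    return cands[int(np.argmin(errs))]

def smoothed_predict(x_train, y_train, x_test, flip_p=0.1, n_draws=50):
    """Smooth over the labels: retrain on randomly label-flipped copies
    of the training set and take a majority vote over the draws."""
    votes = np.zeros(len(x_test))
    for _ in range(n_draws):
        flips = rng.random(len(y_train)) < flip_p
        y_noisy = np.where(flips, 1 - y_train, y_train)
        t = train_threshold(x_train, y_noisy)
        votes += (x_test >= t)
    return (votes / n_draws >= 0.5).astype(int)

x_train = rng.uniform(0.0, 1.0, 200)
y_train = (x_train >= 0.5).astype(int)
x_test = np.array([0.1, 0.4, 0.6, 0.9])
pred = smoothed_predict(x_train, y_train, x_test)
print(pred)
```

The vote margin (how far the vote fraction sits from 0.5) is what a certificate of the kind described above would bound against a limited number of training-label flips.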
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.