Learnability Gaps of Strategic Classification
- URL: http://arxiv.org/abs/2402.19303v1
- Date: Thu, 29 Feb 2024 16:09:19 GMT
- Title: Learnability Gaps of Strategic Classification
- Authors: Lee Cohen, Yishay Mansour, Shay Moran, Han Shao
- Abstract summary: We focus on addressing a fundamental question: the learnability gaps between strategic classification and standard learning.
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
- Score: 68.726857356532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In contrast with standard classification tasks, strategic classification
involves agents strategically modifying their features in an effort to receive
favorable predictions. For instance, given a classifier determining loan
approval based on credit scores, applicants may open or close their credit
cards to fool the classifier. The learning goal is to find a classifier robust
against strategic manipulations. Various settings, based on what and when
information is known, have been explored in strategic classification. In this
work, we focus on addressing a fundamental question: the learnability gaps
between strategic classification and standard learning.
We essentially show that any learnable class is also strategically learnable:
we first consider a fully informative setting, where the manipulation structure
(which is modeled by a manipulation graph $G^\star$) is known and during
training time the learner has access to both the pre-manipulation data and
post-manipulation data. We provide nearly tight sample complexity and regret
bounds, offering significant improvements over prior results. Then, we relax
the fully informative setting by introducing two natural types of uncertainty.
First, following Ahmadi et al. (2023), we consider the setting in which the
learner only has access to the post-manipulation data. We improve the results
of Ahmadi et al. (2023) and close the gap between the mistake upper and lower
bounds they raised. Our second relaxation of the fully informative setting
introduces uncertainty to the manipulation structure. That is, we assume that
the manipulation graph is unknown but belongs to a known class of graphs. We
provide nearly tight bounds on the learning complexity in various unknown
manipulation graph settings. Notably, our algorithm in this setting is of
independent interest and can be applied to other problems such as multi-label
learning.
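The abstract's loan-approval example can be made concrete with a minimal sketch of the strategic setting. This is an illustration only, not the paper's algorithm: the adjacency-list graph, the threshold-style classifier, and the toy data below are all hypothetical.

```python
# Illustrative sketch of strategic classification on a known manipulation
# graph G*. Each node is a feature value; an edge (u, v) means an agent
# at u can manipulate its features to look like v. Given a classifier h,
# an agent presents its own features if h accepts them, and otherwise
# moves to any neighbor that h accepts (if one exists).

def best_response(u, graph, h):
    """Return the feature value the agent at node u presents to h."""
    if h(u):                      # already accepted: no need to manipulate
        return u
    for v in graph.get(u, []):    # try to reach an accepted neighbor
        if h(v):
            return v
    return u                      # no profitable manipulation exists

def strategic_error(data, graph, h):
    """Fraction of examples h misclassifies after agents best-respond."""
    mistakes = sum(h(best_response(x, graph, h)) != y for x, y in data)
    return mistakes / len(data)

# Toy credit-score example: nodes are score buckets, edges are feasible
# manipulations (e.g. opening or closing credit cards shifts the bucket).
graph = {"low": ["mid"], "mid": ["high"], "high": []}
h = lambda x: x == "high"              # accept only the "high" bucket
data = [("low", False), ("mid", False), ("high", True)]
print(strategic_error(data, graph, h)) # the "mid" agent fools h
```

The learning goal described in the abstract is then to pick, from a hypothesis class, the classifier minimizing this post-manipulation error rather than the standard one.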
Related papers
- Strategic Littlestone Dimension: Improved Bounds on Online Strategic Classification [22.031509365704423]
We study the problem of online binary classification in settings where strategic agents can modify their observable features to receive a positive classification.
We introduce the Strategic Littlestone Dimension, a new measure that captures the joint complexity of the hypothesis class and the manipulation graph.
We derive regret bounds in both the realizable setting where all agents manipulate according to the same graph within the graph family, and the agnostic setting where the manipulation graphs are chosen adversarially and not consistently modeled by a single graph in the family.
arXiv Detail & Related papers (2024-07-16T11:31:20Z) - Bayesian Strategic Classification [11.439576371711711]
We study partial information release by the learner in strategic classification.
We show how such partial information release can, counter-intuitively, benefit the learner's accuracy, despite increasing agents' abilities to manipulate.
arXiv Detail & Related papers (2024-02-13T19:51:49Z) - XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z) - Strategic Representation [20.43010800051863]
Strategic machines might craft representations that manipulate their users.
We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation.
Our main result is a learning algorithm that minimizes error despite strategic representations.
arXiv Detail & Related papers (2022-06-17T04:20:57Z) - Learning Losses for Strategic Classification [5.812499828391904]
We take a learning theoretic perspective, focusing on the sample complexity needed to learn a good decision rule.
We analyse the sample complexity for a known graph of possible manipulations in terms of the complexity of the function class and the manipulation graph.
Using techniques from transfer learning theory, we define a similarity measure for manipulation graphs and show that learning outcomes are robust with respect to small changes in the manipulation graph.
arXiv Detail & Related papers (2022-03-25T02:26:16Z) - Knowledge-driven Active Learning [70.37119719069499]
Active learning strategies aim at minimizing the amount of labelled data required to train a Deep Learning model.
Most active strategies are based on uncertainty-driven sample selection, and are often restricted to samples lying close to the decision boundary.
Here we propose to take into consideration common domain-knowledge and enable non-expert users to train a model with fewer samples.
arXiv Detail & Related papers (2021-10-15T06:11:53Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z) - Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model.
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-02-08T03:27:05Z) - SCAN: Learning to Classify Images without Labels [73.69513783788622]
We advocate a two-step approach where feature learning and clustering are decoupled.
A self-supervised task from representation learning is employed to obtain semantically meaningful features.
We obtain promising results on ImageNet, and outperform several semi-supervised learning methods in the low-data regime.
arXiv Detail & Related papers (2020-05-25T18:12:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.