Strategic Classification under Unknown Personalized Manipulation
- URL: http://arxiv.org/abs/2305.16501v2
- Date: Mon, 15 Jan 2024 16:39:52 GMT
- Title: Strategic Classification under Unknown Personalized Manipulation
- Authors: Han Shao, Avrim Blum, Omar Montasser
- Abstract summary: We study the fundamental mistake bound and sample complexity in strategic classification.
Agents can strategically manipulate their feature vector, to a bounded extent, in order to be predicted as positive.
- Score: 32.710715217516025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the fundamental mistake bound and sample complexity in strategic
classification, where agents can strategically manipulate their feature vector
to a bounded extent in order to be predicted as positive. For example, given a
classifier determining college admission, student candidates may try to take
easier classes to improve their GPA, retake the SAT, and change schools in an
effort to fool the classifier. Ball manipulations are a widely studied class of
manipulations in the literature, where agents can modify their feature vector
within a bounded-radius ball. Unlike most prior work, our work considers
manipulations to be personalized, meaning that agents can have different levels
of manipulation ability (e.g., varying radii for ball manipulations) that are
unknown to the learner.
We formalize the learning problem in an interaction model where the learner
first deploys a classifier and the agent manipulates the feature vector within
their manipulation set to game the deployed classifier. We investigate various
scenarios in terms of the information available to the learner during the
interaction, such as observing the original feature vector before or after
deployment, observing the manipulated feature vector, or not seeing either the
original or the manipulated feature vector. We begin by providing online
mistake bounds and PAC sample complexity in these scenarios for ball
manipulations. We also explore non-ball manipulations and show that, even in
the simplest scenario where both the original and the manipulated feature
vectors are revealed, the mistake bounds and sample complexity are lower
bounded by $\Omega(|H|)$ when the target function belongs to a known class $H$.
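To make the interaction model concrete, here is a minimal sketch of one round for the special case of a linear classifier under Euclidean ball manipulation. This is illustrative only: the paper covers general hypothesis classes, the per-agent radius is unknown to the learner, and the function name and the zero-score-counts-as-positive convention below are our assumptions.

```python
import numpy as np

def agent_best_response(x, w, b, r):
    """Ball-manipulation best response to the linear rule sign(w @ x + b).

    An agent at true position x with personal radius r moves the minimum
    distance needed to be predicted positive (score >= 0 counts as
    positive); if the boundary is farther than r away, it does not move.
    """
    score = w @ x + b
    if score >= 0:
        return x                      # already positive: no need to move
    w_norm = np.linalg.norm(w)
    dist = -score / w_norm            # Euclidean distance to the boundary
    if dist <= r:
        return x + dist * w / w_norm  # land exactly on the boundary
    return x                          # boundary out of reach: stay put

# One round of the interaction: the learner deploys (w, b); the agent
# reports the manipulated vector z rather than its true position x.
w, b = np.array([1.0, 2.0]), -1.0
x_true = np.array([0.0, 0.0])         # honest score -1: negative
z = agent_best_response(x_true, w, b, r=0.5)
print(z, w @ z + b)                   # [0.2 0.4], score 0.0: now positive
```

Note that a rational agent moves the minimum distance and therefore lands exactly on the decision boundary; this observation is what the Strategic Perceptron entry below exploits.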
Related papers
- Strategic Classification With Externalities [11.36782598786846]
We propose a new variant of the strategic classification problem.
Motivated by real-world applications, our model crucially allows one agent's manipulation to affect another.
We show that under certain assumptions, the pure Nash Equilibrium of this agent manipulation game is unique and can be efficiently computed.
arXiv Detail & Related papers (2024-10-10T15:28:04Z)
- Knowledge Composition using Task Vectors with Learned Anisotropic Scaling [51.4661186662329]
We introduce aTLAS, an algorithm that linearly combines parameter blocks with different learned coefficients, resulting in anisotropic scaling at the task vector level.
We show that such linear combinations explicitly exploit the low intrinsic dimensionality of pre-trained models, with only a few coefficients being the learnable parameters.
We demonstrate the effectiveness of our method in task arithmetic, few-shot recognition and test-time adaptation, with supervised or unsupervised objectives.
arXiv Detail & Related papers (2024-07-03T07:54:08Z)
- Learnability Gaps of Strategic Classification [68.726857356532]
We focus on a fundamental question: the learnability gap between strategic classification and standard learning.
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z)
- Bayesian Strategic Classification [11.439576371711711]
We study partial information release by the learner in strategic classification.
We show how such partial information release can, counter-intuitively, benefit the learner's accuracy, despite increasing agents' abilities to manipulate.
arXiv Detail & Related papers (2024-02-13T19:51:49Z)
- Fundamental Bounds on Online Strategic Classification [13.442155854812528]
We show that no deterministic algorithm can achieve a mistake bound $o(\Delta)$ in the strategic setting.
We also extend this to the agnostic setting and obtain an algorithm with a $\Delta$ multiplicative regret.
We design randomized algorithms that achieve sublinear regret bounds against both oblivious and adaptive adversaries.
arXiv Detail & Related papers (2023-02-23T22:39:43Z)
- ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z)
- Homomorphism Autoencoder -- Learning Group Structured Representations from Observed Transitions [51.71245032890532]
We propose methods enabling an agent acting upon the world to learn internal representations of sensory information consistent with actions that modify it.
In contrast to existing work, our approach does not require prior knowledge of the group and does not restrict the set of actions the agent can perform.
arXiv Detail & Related papers (2022-07-25T11:22:48Z)
- Learning Losses for Strategic Classification [5.812499828391904]
We take a learning theoretic perspective, focusing on the sample complexity needed to learn a good decision rule.
We analyse the sample complexity for a known graph of possible manipulations in terms of the complexity of the function class and the manipulation graph.
Using techniques from transfer learning theory, we define a similarity measure for manipulation graphs and show that learning outcomes are robust with respect to small changes in the manipulation graph.
arXiv Detail & Related papers (2022-03-25T02:26:16Z)
- Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize samples from both seen and unseen classes, where unseen classes are not observed during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z)
- The Strategic Perceptron [11.078814063722803]
In the presence of strategic agents that desire to be classified as positive, the Perceptron algorithm may not be able to observe the true position of agents.
We show examples in which the predictor oscillates between two solutions forever, making an unbounded number of mistakes.
Our main contribution is a modified Perceptron-style algorithm that makes a bounded number of mistakes in the presence of strategic agents (a toy version of the idea is sketched below).
arXiv Detail & Related papers (2020-08-04T17:20:24Z)
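Complementing the last entry, here is a minimal sketch of a Perceptron-style learner in this setting. It assumes a known ℓ2 budget `delta` shared by all agents and minimal (boundary-landing) manipulation; it mirrors only the idea of un-manipulating observed points before updating, and the function name and tolerance are ours, not the paper's exact algorithm.

```python
import numpy as np

def strategic_perceptron_sketch(stream, delta, dim):
    """Perceptron whose mistake updates use un-manipulated proxy points.

    Assumes every agent shares a known ell_2 budget `delta` and moves the
    minimum distance needed to be labeled positive, so manipulated points
    land exactly on the boundary (w @ z == 0). Such reports are pulled
    back by delta along -w before the update. A sketch of the idea in
    "The Strategic Perceptron", not the paper's exact algorithm.
    """
    w = np.zeros(dim)
    for z, y in stream:                  # z: observed point, y in {-1, +1}
        pred = 1 if w @ z >= 0 else -1
        if pred != y:                    # mistake: run a perceptron update
            x_proxy = z
            w_norm = np.linalg.norm(w)
            if w_norm > 0 and abs(w @ z) < 1e-9:
                # an on-boundary report must have been manipulated
                x_proxy = z - delta * w / w_norm
            w = w + y * x_proxy
    return w
```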