Learning Losses for Strategic Classification
- URL: http://arxiv.org/abs/2203.13421v1
- Date: Fri, 25 Mar 2022 02:26:16 GMT
- Authors: Tosca Lechner and Ruth Urner
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Strategic classification, i.e. classification under possible strategic
manipulations of features, has received considerable attention from both the
machine learning and the game theory communities. Most works focus on analysing
properties of the optimal decision rule under such manipulations. In our work
we take a learning theoretic perspective, focusing on the sample complexity
needed to learn a good decision rule which is robust to strategic manipulation.
We perform this analysis by introducing a novel loss function, the
\emph{strategic manipulation loss}, which takes into account both the accuracy
of the final decision rule and its vulnerability to manipulation. We analyse
the sample complexity for a known graph of possible manipulations in terms of
the complexity of the function class and the manipulation graph. Additionally,
we initiate the study of learning under unknown manipulation capabilities of
the involved agents. Using techniques from transfer learning theory, we define
a similarity measure for manipulation graphs and show that learning outcomes
are robust with respect to small changes in the manipulation graph. Lastly, we
analyse the sample complexity of learning the manipulation capabilities of
agents with respect to this similarity measure, providing novel guarantees for
strategic classification under an unknown manipulation graph.
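The strategic manipulation loss charges a decision rule both for plain misclassification and for being gameable through the manipulation graph. The sketch below is one plausible formalization, not the paper's exact definition: the adjacency-dict graph, the one-step best-response model, and the helper names are illustrative assumptions.

```python
def best_response(h, graph, x):
    # An agent prefers a positive classification: if currently predicted
    # negative, it moves to any reachable neighbour classified positive
    # (a hypothetical one-step best-response model, assumed for illustration).
    if h(x) == 1:
        return x
    for x_prime in graph.get(x, []):
        if h(x_prime) == 1:
            return x_prime
    return x

def strategic_loss(h, graph, x, y):
    # 0/1 loss evaluated after the agent's best response: the rule pays
    # for inaccuracy on the manipulated point, hence also for being gamed.
    return int(h(best_response(h, graph, x)) != y)

# Toy manipulation graph a -- b -- c, given as adjacency lists.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
h = lambda v: int(v == "b")  # classifies only "b" as positive

print(strategic_loss(h, graph, "a", 0))  # 1: a negative agent games h via b
print(strategic_loss(h, graph, "c", 1))  # 0: a positive agent reaches b
```

Note how the second case incurs no loss even though `h("c") = 0`: under this model, manipulation by a truly positive agent can correct an error rather than cause one.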
Related papers
- Strategic Littlestone Dimension: Improved Bounds on Online Strategic Classification [22.031509365704423]
We study the problem of online binary classification in settings where strategic agents can modify their observable features to receive a positive classification.
We introduce the Strategic Littlestone Dimension, a new measure that captures the joint complexity of the hypothesis class and the manipulation graph.
We derive regret bounds in both the realizable setting where all agents manipulate according to the same graph within the graph family, and the agnostic setting where the manipulation graphs are chosen adversarially and not consistently modeled by a single graph in the family.
arXiv Detail & Related papers (2024-07-16T11:31:20Z)
- Learnability Gaps of Strategic Classification [68.726857356532]
We focus on addressing a fundamental question: the learnability gaps between strategic classification and standard learning.
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Scalable and Equitable Math Problem Solving Strategy Prediction in Big Educational Data [2.86829428083307]
We develop an embedding called MVec, in which we learn a representation based on students' mastery levels.
We then cluster these embeddings with a non-parametric clustering method.
We show that our approach can scale up to achieve high accuracy by training on a small sample of a large dataset.
arXiv Detail & Related papers (2023-08-07T19:51:10Z)
- Strategic Classification under Unknown Personalized Manipulation [32.710715217516025]
We study the fundamental mistake bound and sample complexity of strategic classification.
Agents can strategically manipulate their feature vector up to an extent in order to be predicted as positive.
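The "up to an extent" manipulation in such settings is commonly modeled as a ball of bounded radius around the agent's true features. The following is a hedged sketch under that assumption, for a linear classifier with Euclidean ball manipulation; the function name, the epsilon step, and the parameters are hypothetical illustrations, not this paper's construction.

```python
import math

def ball_best_response(w, b, x, radius):
    # Linear rule h(x) = 1 iff <w, x> + b >= 0. An agent predicted
    # negative moves just across the decision boundary when the boundary
    # lies within Euclidean distance `radius`; otherwise it cannot reach
    # the positive region and stays put.
    margin = sum(wi * xi for wi, xi in zip(w, x)) + b
    if margin >= 0:
        return x                        # already predicted positive
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    dist = -margin / norm_w             # distance to the boundary
    if dist > radius:
        return x                        # boundary out of reach
    step = dist + 1e-9                  # minimal move that crosses it
    return tuple(xi + step * wi / norm_w for wi, xi in zip(w, x))

w, b = (1.0, 0.0), 0.0
near = ball_best_response(w, b, (-0.5, 0.0), radius=1.0)
far  = ball_best_response(w, b, (-0.5, 0.0), radius=0.1)
print(sum(wi * xi for wi, xi in zip(w, near)) + b >= 0)  # True: crossed
print(sum(wi * xi for wi, xi in zip(w, far)) + b >= 0)   # False: stuck
```

When the radius is unknown to the learner, as in this paper's setting, the classifier must be chosen without knowing which of these two outcomes each agent can achieve.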
arXiv Detail & Related papers (2023-05-25T22:07:29Z)
- Strategic Representation [20.43010800051863]
Strategic machines might craft representations that manipulate their users.
We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation.
Our main result is a learning algorithm that minimizes error despite strategic representations.
arXiv Detail & Related papers (2022-06-17T04:20:57Z)
- Leveraging Ensembles and Self-Supervised Learning for Fully-Unsupervised Person Re-Identification and Text Authorship Attribution [77.85461690214551]
Learning from fully-unlabeled data is challenging in Multimedia Forensics problems, such as Person Re-Identification and Text Authorship Attribution.
Recent self-supervised learning methods have been shown to be effective when dealing with fully-unlabeled data in cases where the underlying classes have significant semantic differences.
We propose a strategy to tackle Person Re-Identification and Text Authorship Attribution by enabling learning from unlabeled data even when samples from different classes are not prominently diverse.
arXiv Detail & Related papers (2022-02-07T13:08:11Z)
- A Boosting Approach to Reinforcement Learning [59.46285581748018]
We study efficient algorithms for reinforcement learning in decision processes, with complexity independent of the number of states.
We give an efficient algorithm that is capable of improving the accuracy of such weak learning methods.
arXiv Detail & Related papers (2021-08-22T16:00:45Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Guided Variational Autoencoder for Disentanglement Learning [79.02010588207416]
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
arXiv Detail & Related papers (2020-04-02T20:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.