Strategic Classification Made Practical
- URL: http://arxiv.org/abs/2103.01826v1
- Date: Tue, 2 Mar 2021 16:03:26 GMT
- Title: Strategic Classification Made Practical
- Authors: Sagi Levanon and Nir Rosenfeld
- Abstract summary: We present a learning framework for strategic classification that is practical.
Our approach directly minimizes the "strategic" empirical risk, which we achieve by differentiating through the strategic response of users.
A series of experiments demonstrates the effectiveness of our approach in various learning settings.
- Score: 8.778578967271866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Strategic classification concerns the problem of learning in settings where
users can strategically modify their features to improve outcomes. This setting
applies broadly and has received much recent attention. But despite its
practical significance, work in this space has so far been predominantly
theoretical. In this paper we present a learning framework for strategic
classification that is practical. Our approach directly minimizes the
"strategic" empirical risk, achieved by differentiating through the strategic
response of users. This provides flexibility that allows us to extend beyond
the original problem formulation and towards more realistic learning scenarios.
A series of experiments demonstrates the effectiveness of our approach in
various learning settings.
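To make the mechanism concrete, the sketch below illustrates strategic empirical risk minimization under simplifying assumptions: a linear classifier, quadratic movement costs, and a sigmoid-smoothed user utility, with the users' response approximated by unrolled gradient ascent. It is a hedged illustration of the idea, not the paper's implementation; all names and hyperparameters are invented.

```python
# A minimal sketch, assuming a linear classifier, quadratic movement
# costs, and a sigmoid-smoothed user utility. The response is
# approximated by unrolled gradient ascent; names and hyperparameters
# are illustrative, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def soft_best_response(x, w, b, cost=1.0, steps=20, lr=0.1):
    """Approximate each user's utility-maximizing move; create_graph=True
    keeps the inner updates differentiable, so the outer classification
    loss can backpropagate through the users' response."""
    x_moved = x.clone().requires_grad_(True)
    for _ in range(steps):
        utility = torch.sigmoid(x_moved @ w + b) \
                  - 0.5 * cost * ((x_moved - x) ** 2).sum(dim=1)
        (grad,) = torch.autograd.grad(utility.sum(), x_moved, create_graph=True)
        x_moved = x_moved + lr * grad
    return x_moved

torch.manual_seed(0)
n, d = 128, 5
x = torch.randn(n, d)
y = (x.sum(dim=1) > 0).float()               # synthetic labels
w = torch.zeros(d, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([w, b], lr=0.05)

for _ in range(50):
    x_moved = soft_best_response(x, w, b)    # users react to (w, b)
    # "Strategic" empirical risk: classify the moved points, not the raw ones.
    loss = F.binary_cross_entropy_with_logits(x_moved @ w + b, y)
    opt.zero_grad()
    loss.backward()                          # gradients flow through the response
    opt.step()
```

Because the inner updates stay on the autograd graph, the classifier is trained against the points users would move to, rather than the raw training points.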
Related papers
- LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
The survey underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Program-Based Strategy Induction for Reinforcement Learning [5.657991642023959]
We use Bayesian program induction to discover strategies implemented by programs, letting the simplicity of strategies trade off against their effectiveness.
We find strategies that are difficult to express with, or unexpected under, classical incremental learning, such as asymmetric learning from rewarded and unrewarded trials, adaptive horizon-dependent random exploration, and discrete state switching.
arXiv Detail & Related papers (2024-02-26T15:40:46Z) - Classification Under Strategic Self-Selection [13.168262355330299]
We study the effects of self-selection on learning and the implications of learning on the composition of the self-selected population.
We propose a differentiable framework for learning under self-selective behavior, which can be optimized effectively.
arXiv Detail & Related papers (2024-02-23T11:37:56Z) - Representation-Driven Reinforcement Learning [57.44609759155611]
We present a representation-driven framework for reinforcement learning.
By representing policies as estimates of their expected values, we leverage techniques from contextual bandits to guide exploration and exploitation.
We demonstrate the effectiveness of this framework through its application to evolutionary and policy gradient-based approaches.
arXiv Detail & Related papers (2023-05-31T14:59:12Z) - Strategic Classification with Graph Neural Networks [10.131895986034316]
Using a graph for learning introduces inter-user dependencies in prediction.
We propose a differentiable framework for strategically-robust learning of graph-based classifiers.
arXiv Detail & Related papers (2022-05-31T13:11:25Z) - Generalized Strategic Classification and the Case of Aligned Incentives [16.607142366834015]
We argue for a broader perspective on what accounts for strategic user behavior.
Our model subsumes most existing models while also capturing novel settings.
We show how our results and approach can extend to the most general case.
arXiv Detail & Related papers (2022-02-09T09:36:09Z) - Curriculum Design for Teaching via Demonstrations: Theory and
Applications [29.71112499480574]
We study how to design a personalized curriculum over demonstrations to speed up the learner's convergence.
We provide a unified curriculum strategy for two popular learner models: Maximum Causal Entropy Inverse Reinforcement Learning (MaxEnt-IRL) and Cross-Entropy Behavioral Cloning (CrossEnt-BC).
arXiv Detail & Related papers (2021-06-08T21:15:00Z) - Off-Policy Imitation Learning from Observations [78.30794935265425]
Learning from Observations (LfO) is a practical reinforcement learning scenario from which many applications can benefit.
We propose a sample-efficient LfO approach that enables off-policy optimization in a principled manner.
Our approach is comparable with the state of the art on locomotion tasks in terms of both sample efficiency and performance.
arXiv Detail & Related papers (2021-02-25T21:33:47Z) - Disturbing Reinforcement Learning Agents with Corrupted Rewards [62.997667081978825]
We analyze the effects of different attack strategies based on reward perturbations on reinforcement learning algorithms.
We show that smoothly crafted adversarial rewards can mislead the learner, and that with low exploration probabilities the learned policy is more robust to corrupted rewards.
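As a purely illustrative sketch of such a smooth perturbation: the toy chain environment, sinusoidal attack, and hyperparameters below are hypothetical, not the paper's experimental setup.

```python
# Purely illustrative: tabular Q-learning on a toy chain environment with
# a smoothly varying adversarial reward perturbation. The environment,
# attack, and hyperparameters are hypothetical, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1     # eps: exploration probability

def true_reward(s, a):
    return 1.0 if (s == n_states - 1 and a == 1) else 0.0

def corrupted_reward(s, a, t):
    # Smooth (sinusoidal) perturbation favoring the wrong action,
    # rather than unstructured random noise.
    return true_reward(s, a) + 0.5 * np.sin(0.01 * t) * (1.0 if a == 0 else -1.0)

s = 0
for t in range(10_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s_next = (s + 1) % n_states if a == 1 else s
    r = corrupted_reward(s, a, t)      # the learner only sees this signal
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```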
arXiv Detail & Related papers (2021-02-12T15:53:48Z) - Importance Weighted Policy Learning and Adaptation [89.46467771037054]
We study a complementary approach which is conceptually simple, general, modular and built on top of recent improvements in off-policy learning.
The framework is inspired by ideas from the probabilistic inference literature and combines robust off-policy learning with a behavior prior.
Our approach achieves competitive adaptation performance on hold-out tasks compared to meta reinforcement learning baselines and can scale to complex sparse-reward scenarios.
arXiv Detail & Related papers (2020-09-10T14:16:58Z) - Learning Adaptive Exploration Strategies in Dynamic Environments Through
Informed Policy Regularization [100.72335252255989]
We study the problem of learning exploration-exploitation strategies that effectively adapt to dynamic environments.
We propose a novel algorithm that regularizes the training of an RNN-based policy using informed policies trained to maximize the reward in each task.
arXiv Detail & Related papers (2020-05-06T16:14:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.