The Double-Edged Sword of Behavioral Responses in Strategic Classification: Theory and User Studies
- URL: http://arxiv.org/abs/2410.18066v2
- Date: Fri, 25 Oct 2024 15:48:34 GMT
- Title: The Double-Edged Sword of Behavioral Responses in Strategic Classification: Theory and User Studies
- Authors: Raman Ebrahimi, Kristen Vaccaro, Parinaz Naghizadeh
- Abstract summary: We propose a strategic classification model that considers behavioral biases in human responses to algorithms.
We show how misperceptions of a classifier can lead to different types of discrepancies between biased and rational agents' responses.
We show that strategic agents with behavioral biases can benefit or (perhaps, unexpectedly) harm the firm compared to fully rational strategic agents.
- Score: 7.695481260089599
- Abstract: When humans are subject to an algorithmic decision system, they can strategically adjust their behavior accordingly ("game" the system). While a growing line of literature on strategic classification has used game-theoretic modeling to understand and mitigate such gaming, these existing works consider standard models of fully rational agents. In this paper, we propose a strategic classification model that considers behavioral biases in human responses to algorithms. We show how misperceptions of a classifier (specifically, of its feature weights) can lead to different types of discrepancies between biased and rational agents' responses, and identify when behavioral agents over- or under-invest in different features. We also show that strategic agents with behavioral biases can benefit or (perhaps, unexpectedly) harm the firm compared to fully rational strategic agents. We complement our analytical results with user studies, which support our hypothesis of behavioral biases in human responses to the algorithm. Together, our findings highlight the need to account for human (cognitive) biases when designing AI systems, and when providing explanations of them, to strategic humans in the loop.
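To make the modeled discrepancy concrete, below is a minimal sketch (not the paper's actual model) of a linear-threshold setting in which a rational agent best-responds to the classifier's true feature weights while a behaviorally biased agent best-responds to misperceived weights. The specific weights, threshold, initial features, and quadratic-cost projection are all illustrative assumptions.

```python
import numpy as np

def best_response(x, w_perceived, b):
    """Minimum-cost feature change (under quadratic cost) that crosses
    the decision boundary the agent *perceives*: the Euclidean
    projection of x onto the hyperplane w_perceived . x' = b."""
    margin = b - x @ w_perceived
    if margin <= 0:  # already on the accepted side of the perceived boundary
        return x.copy()
    return x + margin * w_perceived / (w_perceived @ w_perceived)

# True classifier weights vs. a biased agent's misperception of them
# (all values are hypothetical, chosen only for illustration).
w_true = np.array([2.0, 1.0])
w_biased = np.array([1.0, 2.0])  # overweights feature 2, underweights feature 1
b = 4.0
x0 = np.array([1.0, 1.0])

x_rational = best_response(x0, w_true, b)
x_biased = best_response(x0, w_biased, b)

print("rational:", x_rational, "accepted by true classifier:", x_rational @ w_true >= b)
print("biased:  ", x_biased, "accepted by true classifier:", x_biased @ w_true >= b)
```

In this toy instance the biased agent over-invests in the feature it overweights and under-invests in the other, so it pays a manipulation cost yet still fails to clear the true threshold; a misperception in the other direction could instead overshoot the boundary. This is one simple sense in which, as the abstract notes, behavioral biases can either benefit or harm the firm relative to fully rational agents.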
Related papers
- Capturing the Complexity of Human Strategic Decision-Making with Machine Learning [4.308322597847064]
We conduct the largest study to date of strategic decision-making in the context of initial play in two-player matrix games.
We show that a deep neural network trained on these data predicts people's choices better than leading theories of strategic behavior.
arXiv Detail & Related papers (2024-08-15T00:39:42Z)
- Measuring Strategization in Recommendation: Users Adapt Their Behavior to Shape Future Content [66.71102704873185]
We test for user strategization by conducting a lab experiment and survey.
We find strong evidence of strategization across outcome metrics, including participants' dwell time and use of "likes".
Our findings suggest that platforms cannot ignore the effect of their algorithms on user behavior.
arXiv Detail & Related papers (2024-05-09T07:36:08Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making [29.071173441651734]
We identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
We develop a causal framework to disentangle the relationship between each bias.
We conclude by discussing opportunities to better address target variable bias in future research.
arXiv Detail & Related papers (2023-02-13T16:29:11Z)
- Strategic Representation [20.43010800051863]
Strategic machines might craft representations that manipulate their users.
We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation.
Our main result is a learning algorithm that minimizes error despite strategic representations.
arXiv Detail & Related papers (2022-06-17T04:20:57Z)
- Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study [6.076137037890219]
We investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency.
We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions.
Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour.
arXiv Detail & Related papers (2021-12-14T22:46:44Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Who Leads and Who Follows in Strategic Classification? [82.44386576129295]
We argue that the order of play in strategic classification is fundamentally determined by the relative frequencies at which the decision-maker and the agents adapt to each other's actions.
We show that a decision-maker with the freedom to choose their update frequency can induce learning dynamics that converge to Stackelberg equilibria with either order of play.
arXiv Detail & Related papers (2021-06-23T16:48:46Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- Catch Me if I Can: Detecting Strategic Behaviour in Peer Assessment [61.24399136715106]
We consider the issue of strategic behaviour in various peer-assessment tasks, including peer grading of exams or homeworks and peer review in hiring or promotions.
Our focus is on designing methods for detection of such manipulations.
Specifically, we consider a setting in which agents evaluate a subset of their peers and output rankings that are later aggregated to form a final ordering.
arXiv Detail & Related papers (2020-10-08T15:08:40Z)