Capturing the Complexity of Human Strategic Decision-Making with Machine Learning
- URL: http://arxiv.org/abs/2408.07865v1
- Date: Thu, 15 Aug 2024 00:39:42 GMT
- Title: Capturing the Complexity of Human Strategic Decision-Making with Machine Learning
- Authors: Jian-Qiao Zhu, Joshua C. Peterson, Benjamin Enke, Thomas L. Griffiths
- Abstract summary: We conduct the largest study to date of strategic decision-making in the context of initial play in two-player matrix games.
We show that a deep neural network trained on these data predicts people's choices better than leading theories of strategic behavior.
- Score: 4.308322597847064
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding how people behave in strategic settings--where they make decisions based on their expectations about the behavior of others--is a long-standing problem in the behavioral sciences. We conduct the largest study to date of strategic decision-making in the context of initial play in two-player matrix games, analyzing over 90,000 human decisions across more than 2,400 procedurally generated games that span a much wider space than previous datasets. We show that a deep neural network trained on these data predicts people's choices better than leading theories of strategic behavior, indicating that there is systematic variation that is not explained by those theories. We then modify the network to produce a new, interpretable behavioral model, revealing what the original network learned about people: their ability to optimally respond and their capacity to reason about others are dependent on the complexity of individual games. This context-dependence is critical in explaining deviations from the rational Nash equilibrium, response times, and uncertainty in strategic decisions. More broadly, our results demonstrate how machine learning can be applied beyond prediction to further help generate novel explanations of complex human behavior.
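The abstract contrasts people's choices with the rational Nash equilibrium and ties deviations to game complexity. A minimal sketch of that contrast, using a logit (quantal-response-style) choice rule rather than the paper's actual neural network: the payoff matrix, belief, and precision values below are illustrative assumptions, not the paper's data. A low precision spreads choice probability across actions; a high precision recovers a near-rational best response.

```python
import numpy as np

# Hypothetical 2x2 matrix game: payoffs[i, j] is the row player's payoff
# when the row player picks action i and the column player picks action j.
payoffs = np.array([[3.0, 0.0],
                    [1.0, 1.0]])

def softmax(x):
    """Numerically stable softmax."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def quantal_response(payoffs, opponent_probs, precision):
    """Logit choice probabilities for the row player.

    Expected payoffs are computed against a belief about the opponent's
    mixed strategy; `precision` controls how sharply the player
    best-responds (precision -> infinity approaches a rational best reply).
    """
    expected = payoffs @ opponent_probs
    return softmax(precision * expected)

# Assumed belief: the column player mixes uniformly over both actions.
belief = np.array([0.5, 0.5])

# A noisy responder (low precision) vs. a near-rational one (high precision).
noisy = quantal_response(payoffs, belief, precision=1.0)
sharp = quantal_response(payoffs, belief, precision=50.0)

print("noisy choice probabilities:", noisy)  # spread across both actions
print("sharp choice probabilities:", sharp)  # concentrated on the best reply
```

Letting precision vary with game complexity, as the paper's interpretable model does for response optimality, would make harder games look "noisier" under this kind of rule.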
Related papers
- The Double-Edged Sword of Behavioral Responses in Strategic Classification: Theory and User Studies [7.695481260089599]
We propose a strategic classification model that considers behavioral biases in human responses to algorithms.
We show how misperceptions of a classifier can lead to different types of discrepancies between biased and rational agents' responses.
We show that strategic agents with behavioral biases can benefit or (perhaps, unexpectedly) harm the firm compared to fully rational strategic agents.
arXiv Detail & Related papers (2024-10-23T17:42:54Z) - Language-based game theory in the age of artificial intelligence [0.6187270874122921]
Our meta-analysis shows that sentiment analysis can explain human behaviour beyond economic outcomes.
We hope this work sets the stage for a novel game theoretical approach that emphasizes the importance of language in human decisions.
arXiv Detail & Related papers (2024-03-13T20:21:20Z) - Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem [0.0]
We provide an overview of the different strategies explanation providers could deploy to adapt the returned explanation to their benefit.
We analyse several objectives and concrete scenarios that could motivate providers to engage in such manipulation.
We argue it is crucial to investigate this issue now, before these methods are widely implemented, and we propose some mitigation strategies.
arXiv Detail & Related papers (2023-06-24T07:21:28Z) - Learning signatures of decision making from many individuals playing the same game [54.33783158658077]
We design a predictive framework that learns representations to encode an individual's 'behavioral style'.
We apply our method to a large-scale behavioral dataset from 1,000 humans playing a 3-armed bandit task.
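For readers unfamiliar with the task, a minimal sketch of a 3-armed bandit with a simple epsilon-greedy player; the reward probabilities, epsilon, and trial count are illustrative assumptions, not details from the paper, whose framework learns richer representations of individual play.

```python
import random

def simulate_bandit(reward_probs, epsilon=0.1, n_trials=500, seed=0):
    """Simulate an epsilon-greedy player on a multi-armed bandit.

    `reward_probs` gives each arm's Bernoulli reward probability; the
    player keeps running value estimates and mostly exploits the best one.
    """
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)
    choices = []
    for _ in range(n_trials):
        if rng.random() < epsilon:
            arm = rng.randrange(len(reward_probs))  # explore a random arm
        else:
            arm = max(range(len(values)), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        choices.append(arm)
    return choices, values

# Three arms with different (assumed) reward probabilities.
choices, values = simulate_bandit([0.2, 0.5, 0.8])
print("estimated arm values:", values)
```

Different exploration rates or learning rules produce different choice sequences, which is the kind of individual variation a "behavioral style" representation would capture.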
arXiv Detail & Related papers (2023-02-21T21:41:53Z) - JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions [75.42526766746515]
We propose a new commonsense reasoning dataset based on humans' Interactive Fiction (IF) gameplay walkthroughs.
Our dataset focuses on the assessment of functional commonsense knowledge rules rather than factual knowledge.
Experiments show that the introduced dataset is challenging for previous machine reading models as well as new large language models.
arXiv Detail & Related papers (2022-10-18T19:20:53Z) - Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanisms of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm makes the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Strategic Representation [20.43010800051863]
Strategic machines might craft representations that manipulate their users.
We formalize this as a learning problem, and pursue algorithms for decision-making that are robust to manipulation.
Our main result is a learning algorithm that minimizes error despite strategic representations.
arXiv Detail & Related papers (2022-06-17T04:20:57Z) - Who Leads and Who Follows in Strategic Classification? [82.44386576129295]
We argue that the order of play in strategic classification is fundamentally determined by the relative frequencies at which the decision-maker and the agents adapt to each other's actions.
We show that a decision-maker with the freedom to choose their update frequency can induce learning dynamics that converge to Stackelberg equilibria with either order of play.
arXiv Detail & Related papers (2021-06-23T16:48:46Z) - Modeling the EdNet Dataset with Logistic Regression [0.0]
We describe our experience with the competition from the perspective of educational data mining.
We discuss some basic results in the Kaggle system and our thoughts on how those results may have been improved.
arXiv Detail & Related papers (2021-05-17T20:30:36Z) - Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why deep neural networks perform poorly under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.