ElementaryNet: A Non-Strategic Neural Network for Predicting Human Behavior in Normal-Form Games
- URL: http://arxiv.org/abs/2503.05925v2
- Date: Fri, 08 Aug 2025 23:36:46 GMT
- Title: ElementaryNet: A Non-Strategic Neural Network for Predicting Human Behavior in Normal-Form Games
- Authors: Greg d'Eon, Hala Murad, Kevin Leyton-Brown, James R. Wright
- Abstract summary: Behavioral game theory models serve two purposes: yielding insights into how human decision-making works, and predicting how people would behave in novel strategic settings. A system called GameNet represents the state of the art for predicting human behavior in the setting of unrepeated simultaneous-move games. We show that it is possible to derive insights about human behavior by varying ElementaryNet's features and interpreting its parameters.
- Score: 11.093095696026861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Behavioral game theory models serve two purposes: yielding insights into how human decision-making works, and predicting how people would behave in novel strategic settings. A system called GameNet represents the state of the art for predicting human behavior in the setting of unrepeated simultaneous-move games, combining a simple "level-k" model of strategic reasoning with a complex neural network model of non-strategic "level-0" behavior. Although this reliance on well-established ideas from cognitive science ought to make GameNet interpretable, the flexibility of its level-0 model raises the possibility that it is able to emulate strategic reasoning. In this work, we prove that GameNet's level-0 model is indeed too general. We then introduce ElementaryNet, a novel neural network that is provably incapable of expressing strategic behavior. We show that these additional restrictions are empirically harmless, leading ElementaryNet to predictive performance statistically indistinguishable from GameNet's. We then show how it is possible to derive insights about human behavior by varying ElementaryNet's features and interpreting its parameters, finding evidence of iterative reasoning, learning about the depth of this reasoning process, and showing the value of a rich level-0 specification.
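The "level-k" idea the abstract builds on can be made concrete with a small sketch: each level best-responds (softly) to a model of the level below, bottoming out at a non-strategic level-0. The sketch below uses a uniform level-0 purely for illustration; GameNet and ElementaryNet instead *learn* a neural level-0 model, whose expressiveness is the paper's central concern. All names and weights here are illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(x, lam=1.0):
    """Logit (quantal) response with precision lam."""
    z = np.exp(lam * (x - x.max()))
    return z / z.sum()

def level_k_prediction(U_row, U_col, K=2, level_weights=None):
    """Toy level-k prediction for the row player of a normal-form game.

    U_row[i, j], U_col[i, j]: payoffs to the row / column player when
    row plays action i and column plays action j. Level 0 is uniform
    here; richer learned level-0 models slot in at the same place.
    """
    n_row, n_col = U_row.shape
    row = [np.full(n_row, 1.0 / n_row)]   # level-0 row strategy
    col = [np.full(n_col, 1.0 / n_col)]   # level-0 column strategy
    for k in range(1, K + 1):
        # Level k responds (softly) to a level-(k-1) opponent.
        row.append(softmax(U_row @ col[k - 1]))
        col.append(softmax(row[k - 1] @ U_col))
    w = level_weights if level_weights is not None else np.full(K + 1, 1.0 / (K + 1))
    # Predicted population behavior: mixture over reasoning levels.
    return sum(wk * p for wk, p in zip(w, row))
```

Fitting the level weights (and a learned level-0) to human play data is what turns this skeleton into a behavioral model.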
Related papers
- People use fast, flat goal-directed simulation to reason about novel problems [68.55490343866545]
We show that people are systematic and adaptively rational in how they play a game for the first time. We explain these capacities via a computational cognitive model that we call the "Intuitive Gamer". Our work offers new insights into how people rapidly evaluate, act, and make suggestions when encountering novel problems.
arXiv Detail & Related papers (2025-10-13T15:12:08Z) - CognitionNet: A Collaborative Neural Network for Play Style Discovery in Online Skill Gaming Platform [6.665636945186558]
We propose a two-stage deep neural network, CognitionNet. The first stage focuses on mining game behaviours as cluster representations in a latent space. The second aggregates over these micro patterns to discover play styles.
arXiv Detail & Related papers (2025-05-01T05:51:19Z) - Game Theory Meets Statistical Mechanics in Deep Learning Design [0.06990493129893112]
We present a novel deep representation that seamlessly merges principles of game theory with laws of statistical mechanics.
It performs feature extraction, dimensionality reduction, and pattern classification within a single learning framework.
arXiv Detail & Related papers (2024-10-16T06:02:18Z) - Capturing the Complexity of Human Strategic Decision-Making with Machine Learning [4.308322597847064]
We conduct the largest study to date of strategic decision-making in the context of initial play in two-player matrix games.
We show that a deep neural network trained on these data predicts people's choices better than leading theories of strategic behavior.
arXiv Detail & Related papers (2024-08-15T00:39:42Z) - Graph Mining under Data scarcity [6.229055041065048]
We propose an Uncertainty Estimator framework that can be applied on top of any generic Graph Neural Network (GNN).
We train these models under the classic episodic learning paradigm in the $n$-way, $k$-shot fashion, in an end-to-end setting.
Our method outperforms the baselines, which demonstrates the efficacy of the Uncertainty Estimator for Few-shot node classification on graphs with a GNN.
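The "$n$-way, $k$-shot" episodic paradigm mentioned above has a standard shape: each training episode samples $n$ classes, with $k$ labelled support examples and some query examples per class. A minimal sketch of that sampling protocol (the function name and signature are hypothetical, not the paper's code):

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=3, q_query=5, rng=random):
    """Sample one n-way, k-shot episode from a {node_id: class} mapping.

    Returns (support, query) lists of (node, episode_label) pairs,
    where episode_label re-indexes the sampled classes as 0..n_way-1.
    """
    by_class = defaultdict(list)
    for node, c in labels.items():
        by_class[c].append(node)
    # Only classes with enough labelled nodes for support + query sets.
    eligible = [c for c, nodes in by_class.items() if len(nodes) >= k_shot + q_query]
    classes = rng.sample(eligible, n_way)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        nodes = rng.sample(by_class[c], k_shot + q_query)
        support += [(n, episode_label) for n in nodes[:k_shot]]
        query += [(n, episode_label) for n in nodes[k_shot:]]
    return support, query
```

A few-shot node classifier is then trained end-to-end by minimizing query-set loss across many such episodes.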
arXiv Detail & Related papers (2024-06-07T10:50:03Z) - A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
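The random feature model the abstract analyzes has a simple concrete form: a frozen random first layer and a linear readout trained by gradient descent, with the same finite dataset reused at every step. A toy version (sizes, target, and learning rate are illustrative assumptions, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 10, 200, 50                       # input dim, features, samples
W = rng.normal(size=(p, d)) / np.sqrt(d)    # frozen random first layer
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])                         # toy target function

Phi = np.maximum(X @ W.T, 0.0)              # (n, p) fixed ReLU features
a = np.zeros(p)                             # trainable linear readout
lr, losses = 0.01, []
for _ in range(500):                        # repeated reuse of one dataset
    pred = Phi @ a
    losses.append(np.mean((pred - y) ** 2))
    a -= lr * Phi.T @ (pred - y) / n        # gradient of 0.5 * MSE
```

Tracking the same loss on held-out Gaussian inputs alongside `losses` is how one would see the train/test gap the abstract describes build up over training time.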
arXiv Detail & Related papers (2024-02-02T01:41:38Z) - Game-Theoretic Unlearnable Example Generator [18.686469222136854]
Unlearnable example attacks aim to degrade the clean test accuracy of deep learning by adding imperceptible perturbations to the training samples.
In this paper, we investigate unlearnable example attacks from a game-theoretic perspective, by formulating the attack as a nonzero sum Stackelberg game.
We propose a novel attack method, called the Game Unlearnable Example (GUE), which has three main ingredients.
arXiv Detail & Related papers (2024-01-31T00:43:30Z) - Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for merging the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
arXiv Detail & Related papers (2023-07-13T09:39:10Z) - Seeing in Words: Learning to Classify through Language Bottlenecks [59.97827889540685]
Humans can explain their predictions using succinct and intuitive descriptions.
We show that a vision model whose feature representations are text can effectively classify ImageNet images.
arXiv Detail & Related papers (2023-06-29T00:24:42Z) - NetHack is Hard to Hack [37.24009814390211]
In the NeurIPS 2021 NetHack Challenge, symbolic agents outperformed neural approaches by over four times in median game score.
We present an extensive study on neural policy learning for NetHack.
We produce a state-of-the-art neural agent that surpasses previous fully neural policies by 127% in offline settings and 25% in online settings on median game score.
arXiv Detail & Related papers (2023-05-30T17:30:17Z) - Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion
Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z) - Neural Additive Models for Location Scale and Shape: A Framework for
Interpretable Neural Regression Beyond the Mean [1.0923877073891446]
Deep neural networks (DNNs) have proven to be highly effective in a variety of tasks.
Despite this success, the inner workings of DNNs are often not transparent.
This lack of interpretability has led to increased research on inherently interpretable neural networks.
arXiv Detail & Related papers (2023-01-27T17:06:13Z) - Improved Convergence Guarantees for Shallow Neural Networks [91.3755431537592]
We prove convergence of depth 2 neural networks, trained via gradient descent, to a global minimum.
Our model has the following features: regression with quadratic loss function, fully connected feedforward architecture, ReLU activations, Gaussian data instances, adversarial labels.
These results strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the "NTK regime".
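The training setup listed above is easy to instantiate: a one-hidden-layer fully connected ReLU network, quadratic loss, Gaussian inputs, and arbitrary (here: random, standing in for "adversarial") labels, trained by full-batch gradient descent. This toy script trains only the first layer with a fixed output layer, a common simplification; the scale and hyperparameters are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 5, 64, 200
X = rng.normal(size=(n, d))                       # Gaussian data instances
y = rng.choice([-1.0, 1.0], size=n)               # arbitrary labels
W = rng.normal(size=(m, d)) / np.sqrt(d)          # hidden layer (trained)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # output layer (fixed)

def loss():
    return 0.5 * np.mean((np.maximum(X @ W.T, 0.0) @ a - y) ** 2)

initial = loss()
lr = 0.02
for _ in range(500):
    H = np.maximum(X @ W.T, 0.0)                  # (n, m) hidden activations
    err = H @ a - y                               # quadratic-loss residual
    # dL/dW: residual routed through active ReLU units and output weights.
    W -= lr * ((err[:, None] * (H > 0)) * a).T @ X / n
```

The paper's question is when gradient descent in settings like this provably reaches a global minimum; the script only demonstrates the dynamics, not the guarantee.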
arXiv Detail & Related papers (2022-12-05T14:47:52Z) - Evaluation Beyond Task Performance: Analyzing Concepts in AlphaZero in
Hex [39.001544338346655]
We investigate AlphaZero's internal representations in the game of Hex using two evaluation techniques from natural language processing (NLP): model probing and behavioral tests.
We find that concepts related to short-term end-game planning are best encoded in the final layers of the model, whereas concepts related to long-term planning are encoded in the middle layers of the model.
arXiv Detail & Related papers (2022-11-26T21:59:11Z) - Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task [75.35278593566068]
Language models show a surprising range of capabilities, but the source of their apparent competence is unclear.
Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see?
We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello.
arXiv Detail & Related papers (2022-10-24T16:29:55Z) - Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify them.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z) - Towards Disentangling Information Paths with Coded ResNeXt [11.884259630414515]
We take a novel approach to enhance the transparency of the function of the whole network.
We propose a neural network architecture for classification, in which the information that is relevant to each class flows through specific paths.
arXiv Detail & Related papers (2022-02-10T21:45:49Z) - Teach me to play, gamer! Imitative learning in computer games via
linguistic description of complex phenomena and decision tree [55.41644538483948]
We present a new machine learning model by imitation based on the linguistic description of complex phenomena.
The method can be a good alternative to design and implement the behaviour of intelligent agents in video game development.
arXiv Detail & Related papers (2021-01-06T21:14:10Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
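A common building block behind binary networks is sign binarization in the forward pass paired with a straight-through estimator (STE) in the backward pass, so that the non-differentiable sign function still admits useful gradients. A generic NumPy sketch (the paper evaluates several binarization strategies; this is the textbook scheme, not necessarily theirs):

```python
import numpy as np

def binarize(w):
    """Deterministic sign binarization: weights become {-1, +1}."""
    return np.where(w >= 0.0, 1.0, -1.0)

def ste_backward(grad_out, w, clip=1.0):
    """Straight-through estimator: treat binarize as identity for
    |w| <= clip, and block gradients for saturated weights."""
    return grad_out * (np.abs(w) <= clip)

# Forward pass of one binarized linear layer.
w = np.array([0.7, -0.2, 1.5, -0.9])
x = np.array([1.0, 2.0, 3.0, 4.0])
y = binarize(w) @ x
```

The full-precision weights `w` are what the optimizer updates; only the binarized copies are used at inference, which is where the memory and compute savings come from.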
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language
Learning [78.3857991931479]
We present GROLLA, an evaluation framework for Grounded Language Learning with Attributes.
We also propose a new dataset CompGuessWhat?! as an instance of this framework for evaluating the quality of learned neural representations.
arXiv Detail & Related papers (2020-06-03T11:21:42Z) - iCapsNets: Towards Interpretable Capsule Networks for Text
Classification [95.31786902390438]
Traditional machine learning methods are easy to interpret but achieve low accuracy.
We propose interpretable capsule networks (iCapsNets) to bridge this gap.
iCapsNets can be interpreted both locally and globally.
arXiv Detail & Related papers (2020-05-16T04:11:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.