Spontaneous Giving and Calculated Greed in Language Models
- URL: http://arxiv.org/abs/2502.17720v2
- Date: Mon, 03 Mar 2025 04:31:48 GMT
- Title: Spontaneous Giving and Calculated Greed in Language Models
- Authors: Yuxuan Li, Hirokazu Shirado
- Abstract summary: We find that reasoning models significantly reduce cooperation and norm enforcement, prioritizing individual rationality. Our results suggest the need for AI architectures that incorporate social intelligence alongside reasoning capabilities to ensure that AI supports, rather than disrupts, human cooperative intuition.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models demonstrate advanced problem-solving capabilities by incorporating reasoning techniques such as chain of thought and reflection. However, how these reasoning capabilities extend to social intelligence remains unclear. In this study, we investigate this question using economic games that model social dilemmas, where social intelligence plays a crucial role. First, we examine the effects of chain-of-thought and reflection techniques in a public goods game. We then extend our analysis to six economic games on cooperation and punishment, comparing off-the-shelf non-reasoning and reasoning models. We find that reasoning models significantly reduce cooperation and norm enforcement, prioritizing individual rationality. Consequently, groups with more reasoning models exhibit less cooperation and lower gains through repeated interactions. These behaviors parallel human tendencies of "spontaneous giving and calculated greed." Our results suggest the need for AI architectures that incorporate social intelligence alongside reasoning capabilities to ensure that AI supports, rather than disrupts, human cooperative intuition.
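The public goods game used in the study can be stated compactly in code. The sketch below uses the game's standard payoff rule; the multiplier and endowment values are illustrative, not taken from the paper.

```python
def public_goods_round(contributions, multiplier=1.6, endowment=10):
    """One round of a public goods game: each player contributes part of
    an endowment to a shared pot; the pot is multiplied and split equally.
    A player's payoff is what they kept plus their share of the pot."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Full cooperation maximizes group earnings...
coop = public_goods_round([10, 10, 10, 10])   # every player earns 16.0
# ...but a lone free-rider out-earns the cooperators,
# which is the individually rational move the paper associates
# with reasoning models.
mixed = public_goods_round([10, 10, 10, 0])   # defector earns 22.0, others 12.0
```

The tension between the two calls above is the social dilemma: reasoning toward individual payoff (contribute 0) lowers the group's total gains across repeated interactions.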
Related papers
- Social Genome: Grounded Social Reasoning Abilities of Multimodal Models [61.88413918026431]
Social Genome is the first benchmark for fine-grained, grounded social reasoning abilities of multimodal models.
It contains 272 videos of interactions and 1,486 human-annotated reasoning traces related to inferences about these interactions.
Social Genome is also the first modeling challenge to study external knowledge in social reasoning.
arXiv Detail & Related papers (2025-02-21T00:05:40Z) - The Danger of Overthinking: Examining the Reasoning-Action Dilemma in Agentic Tasks [96.27754404942364]
Large Reasoning Models (LRMs) represent a breakthrough in AI problem-solving capabilities, but their effectiveness in interactive environments can be limited.
This paper introduces and analyzes overthinking in LRMs.
We observe three recurring patterns: Analysis Paralysis, Rogue Actions, and Premature Disengagement.
arXiv Detail & Related papers (2025-02-12T09:23:26Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - AI and Social Theory [0.0]
We sketch a programme for AI-driven social theory, starting by defining what we mean by artificial intelligence (AI).
We then lay out our model for how AI based models can draw on the growing availability of digital data to help test the validity of different social theories based on their predictive power.
arXiv Detail & Related papers (2024-07-07T12:26:16Z) - Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z) - Language-based game theory in the age of artificial intelligence [0.6187270874122921]
Our meta-analysis shows that sentiment analysis can explain human behaviour beyond economic outcomes.
We hope this work sets the stage for a novel game theoretical approach that emphasizes the importance of language in human decisions.
arXiv Detail & Related papers (2024-03-13T20:21:20Z) - Visual cognition in multimodal large language models [12.603212933816206]
Recent advancements have rekindled interest in the potential to emulate human-like cognitive abilities.
This paper evaluates the current state of vision-based large language models in the domains of intuitive physics, causal reasoning, and intuitive psychology.
arXiv Detail & Related papers (2023-11-27T18:58:34Z) - UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations [62.71847873326847]
We investigate the ability to model unusual, unexpected, and unlikely situations.
Given a piece of context with an unexpected outcome, this task requires reasoning abductively to generate an explanation.
We release a new English language corpus called UNcommonsense.
arXiv Detail & Related papers (2023-11-14T19:00:55Z) - The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z) - Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z) - Cognitive Models as Simulators: The Case of Moral Decision-Making [9.024707986238392]
In this work, we substantiate the idea of "cognitive models as simulators": having AI systems interact with, and collect feedback from, cognitive models instead of humans.
Here, we leverage this idea in the context of moral decision-making by having reinforcement learning agents learn about fairness through interacting with a cognitive model of the Ultimatum Game (UG).
Our work suggests that using cognitive models as simulators of humans is an effective approach for training AI systems, presenting an important way for computational cognitive science to make contributions to AI.
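The Ultimatum Game interaction that paper builds on can be sketched as follows. The threshold-based responder here is a hypothetical stand-in for a cognitive model, not the paper's actual simulator.

```python
def ultimatum(offer, pie=10, responder_threshold=3):
    """One Ultimatum Game interaction: the proposer offers part of the
    pie; the responder accepts if the offer meets its fairness threshold,
    otherwise both players get nothing."""
    if offer >= responder_threshold:
        return pie - offer, offer   # (proposer payoff, responder payoff)
    return 0, 0                     # rejection punishes both players

fair = ultimatum(5)     # accepted: proposer 5, responder 5
unfair = ultimatum(1)   # rejected: both earn 0
```

A learning agent playing the proposer against such a responder is pushed toward fair offers, since low offers are rejected and earn nothing, which is the fairness signal the cognitive-model simulator provides.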
arXiv Detail & Related papers (2022-10-08T23:14:14Z) - Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior.
Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents.
arXiv Detail & Related papers (2022-08-22T14:06:06Z) - Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
We focus on techniques which learn a model or policy of behavior through exploration and feedback.
Next generation autonomous and adaptive systems will largely include AI agents and humans working together as teams.
arXiv Detail & Related papers (2022-05-13T07:33:49Z) - Learning Human Rewards by Inferring Their Latent Intelligence Levels in Multi-Agent Games: A Theory-of-Mind Approach with Application to Driving Data [18.750834997334664]
We argue that humans are bounded rational and have different intelligence levels when reasoning about others' decision-making process.
We propose a new multi-agent Inverse Reinforcement Learning framework that reasons about humans' latent intelligence levels during learning.
arXiv Detail & Related papers (2021-03-07T07:48:31Z) - Multi-Principal Assistance Games [11.85513759444069]
Impossibility theorems in social choice theory and voting theory can be applied to such games.
We analyze in particular a bandit apprentice game in which the humans act first to demonstrate their individual preferences for the arms.
We propose a social choice method that uses shared control of a system to combine preference inference with social welfare optimization.
arXiv Detail & Related papers (2020-07-19T00:23:25Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.