Spontaneous Giving and Calculated Greed in Language Models
- URL: http://arxiv.org/abs/2502.17720v2
- Date: Mon, 03 Mar 2025 04:31:48 GMT
- Title: Spontaneous Giving and Calculated Greed in Language Models
- Authors: Yuxuan Li, Hirokazu Shirado
- Abstract summary: We find that reasoning models significantly reduce cooperation and norm enforcement, prioritizing individual rationality. Our results suggest the need for AI architectures that incorporate social intelligence alongside reasoning capabilities to ensure that AI supports, rather than disrupts, human cooperative intuition.
- Score: 5.754869099304775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models demonstrate advanced problem-solving capabilities by incorporating reasoning techniques such as chain of thought and reflection. However, how these reasoning capabilities extend to social intelligence remains unclear. In this study, we investigate this question using economic games that model social dilemmas, where social intelligence plays a crucial role. First, we examine the effects of chain-of-thought and reflection techniques in a public goods game. We then extend our analysis to six economic games on cooperation and punishment, comparing off-the-shelf non-reasoning and reasoning models. We find that reasoning models significantly reduce cooperation and norm enforcement, prioritizing individual rationality. Consequently, groups with more reasoning models exhibit less cooperation and lower gains through repeated interactions. These behaviors parallel human tendencies of "spontaneous giving and calculated greed." Our results suggest the need for AI architectures that incorporate social intelligence alongside reasoning capabilities to ensure that AI supports, rather than disrupts, human cooperative intuition.
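To make the social-dilemma structure concrete, here is a minimal sketch of the payoff rule in a standard linear public goods game, the class of game used in the paper's first experiment; the endowment, multiplier, and group size are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of a linear public goods game round (illustrative parameters,
# not the exact setup used in the paper).
def public_goods_payoffs(contributions, endowment=20.0, multiplier=1.6):
    """Each player keeps (endowment - contribution) and receives an equal
    share of the multiplied common pool."""
    n = len(contributions)
    pool_share = multiplier * sum(contributions) / n
    return [endowment - c + pool_share for c in contributions]

# Full cooperation beats full defection collectively, but each individual
# is better off contributing nothing -- the dilemma the models face.
print(public_goods_payoffs([20, 20, 20, 20]))  # everyone cooperates: 32.0 each
print(public_goods_payoffs([0, 20, 20, 20]))   # the lone free-rider earns 44.0
```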
Related papers
- Thinking About Thinking: SAGE-nano's Inverse Reasoning for Self-Aware Language Models [0.0]
Large Language Models (LLMs) have demonstrated remarkable capabilities at solving complex reasoning tasks with Chain-of-Thought prompting. We introduce inverse reasoning, a novel paradigm enabling LLMs to decompose and explain their own reasoning chains post-hoc. Our work creates new avenues for transparent AI systems and closes significant gaps in AI safety, education, and scientific discovery.
arXiv Detail & Related papers (2025-06-30T09:53:41Z) - Bayesian Social Deduction with Graph-Informed Language Models [3.7540464038118633]
Social reasoning remains a challenging task for large language models. We introduce a hybrid reasoning framework that externalizes belief inference to a structured probabilistic model. Our approach achieves competitive performance with much larger models in Agent-Agent play.
arXiv Detail & Related papers (2025-06-21T18:45:28Z) - Thinkless: LLM Learns When to Think [57.857534644932194]
Reasoning Language Models, capable of extended chain-of-thought reasoning, have demonstrated remarkable performance on tasks requiring complex logical inference. We propose Thinkless, a learnable framework that empowers an LLM to adaptively select between short-form and long-form reasoning. On several benchmarks such as Minerva Algebra, MATH-500, and GSM8K, Thinkless is able to reduce the usage of long-chain thinking by 50%-90%.
arXiv Detail & Related papers (2025-05-19T17:24:16Z) - Social Genome: Grounded Social Reasoning Abilities of Multimodal Models [61.88413918026431]
Social Genome is the first benchmark for fine-grained, grounded social reasoning abilities of multimodal models.
It contains 272 videos of interactions and 1,486 human-annotated reasoning traces related to inferences about these interactions.
Social Genome is also the first modeling challenge to study external knowledge in social reasoning.
arXiv Detail & Related papers (2025-02-21T00:05:40Z) - The Danger of Overthinking: Examining the Reasoning-Action Dilemma in Agentic Tasks [96.27754404942364]
Large Reasoning Models (LRMs) represent a breakthrough in AI problem-solving capabilities, but their effectiveness in interactive environments can be limited.
This paper introduces and analyzes overthinking in LRMs.
We observe three recurring patterns: Analysis Paralysis, Rogue Actions, and Premature Disengagement.
arXiv Detail & Related papers (2025-02-12T09:23:26Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as, despite efficiency improvements, its compute demands increase faster than model performance.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - AI and Social Theory [0.0]
We sketch a programme for AI-driven social theory, starting by defining what we mean by artificial intelligence (AI).
We then lay out our model for how AI based models can draw on the growing availability of digital data to help test the validity of different social theories based on their predictive power.
arXiv Detail & Related papers (2024-07-07T12:26:16Z) - Nicer Than Humans: How do Large Language Models Behave in the Prisoner's Dilemma? [0.1474723404975345]
We study the cooperative behavior of Llama2 when playing the Iterated Prisoner's Dilemma against random adversaries displaying various levels of hostility.
We find that Llama2 tends not to initiate defection but adopts a cautious approach towards cooperation.
In comparison to prior research on human participants, Llama2 exhibits a greater inclination towards cooperative behavior.
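As a rough illustration of that setup, the sketch below plays one Iterated Prisoner's Dilemma round against a random adversary whose defection probability stands in for a hostility level; the payoff values and function names are conventional assumptions, not details from the paper.

```python
import random

# Conventional Prisoner's Dilemma payoffs (T > R > P > S); the paper's exact
# values and prompting setup are not reproduced here.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def hostile_adversary(hostility):
    """Random opponent that defects with probability `hostility`."""
    return "D" if random.random() < hostility else "C"

def play_round(model_move, hostility):
    opponent_move = hostile_adversary(hostility)
    return PAYOFFS[(model_move, opponent_move)]  # (model payoff, opponent payoff)

# e.g. a cooperating model facing a moderately hostile opponent
print(play_round("C", hostility=0.5))
```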
arXiv Detail & Related papers (2024-06-19T14:51:14Z) - Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z) - Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z) - Language-based game theory in the age of artificial intelligence [0.6187270874122921]
Our meta-analysis shows that sentiment analysis can explain human behaviour beyond economic outcomes.
We hope this work sets the stage for a novel game theoretical approach that emphasizes the importance of language in human decisions.
arXiv Detail & Related papers (2024-03-13T20:21:20Z) - PHAnToM: Persona-based Prompting Has An Effect on Theory-of-Mind Reasoning in Large Language Models [25.657579792829743]
We empirically evaluate how role-playing prompting influences Theory-of-Mind (ToM) reasoning capabilities.
We propose that, beyond the inherent variance in the complexity of reasoning tasks, performance differences arise because of socially motivated prompting differences.
arXiv Detail & Related papers (2024-03-04T17:34:34Z) - Visual cognition in multimodal large language models [12.603212933816206]
Recent advancements have rekindled interest in the potential of multimodal large language models to emulate human-like cognitive abilities.
This paper evaluates the current state of vision-based large language models in the domains of intuitive physics, causal reasoning, and intuitive psychology.
arXiv Detail & Related papers (2023-11-27T18:58:34Z) - UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations [62.71847873326847]
We investigate the ability to model unusual, unexpected, and unlikely situations.
Given a piece of context with an unexpected outcome, this task requires reasoning abductively to generate an explanation.
We release a new English language corpus called UNcommonsense.
arXiv Detail & Related papers (2023-11-14T19:00:55Z) - The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z) - From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z) - Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games [14.063311955315077]
Large language models (LLMs) are effective at answering questions that are clearly asked.
When faced with ambiguous queries, they can act unpredictably and produce incorrect outputs.
This underscores the need for the development of intelligent agents capable of asking clarification questions to resolve ambiguities effectively.
arXiv Detail & Related papers (2023-10-02T16:55:37Z) - Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [55.66353783572259]
Causal-Consistency Chain-of-Thought harnesses multi-agent collaboration to bolster the faithfulness and causality of foundation models. Our framework demonstrates significant superiority over state-of-the-art methods through extensive and comprehensive evaluations.
arXiv Detail & Related papers (2023-08-23T04:59:21Z) - Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z) - Cognitive Models as Simulators: The Case of Moral Decision-Making [9.024707986238392]
In this work, we substantiate the idea of "cognitive models as simulators", which is to have AI systems interact with, and collect feedback from, cognitive models instead of humans.
Here, we leverage this idea in the context of moral decision-making, by having reinforcement learning agents learn about fairness through interacting with a cognitive model of the Ultimatum Game (UG).
Our work suggests that using cognitive models as simulators of humans is an effective approach for training AI systems, presenting an important way for computational cognitive science to make contributions to AI.
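For context, the Ultimatum Game referenced above has a simple structure; the sketch below shows a conventional one-shot version with an illustrative threshold-based responder, not the cognitive-model simulator used in the paper.

```python
# Minimal sketch of a one-shot Ultimatum Game (illustrative, not the
# cognitive-model simulator described in the paper).
def ultimatum_game(offer, total=10.0, acceptance_threshold=3.0):
    """Proposer offers `offer` out of `total`; a threshold-based responder
    accepts iff the offer meets its fairness threshold. Rejection leaves
    both players with nothing."""
    if offer >= acceptance_threshold:
        return total - offer, offer   # (proposer payoff, responder payoff)
    return 0.0, 0.0

print(ultimatum_game(5.0))  # fair split accepted: (5.0, 5.0)
print(ultimatum_game(1.0))  # lowball rejected:    (0.0, 0.0)
```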
arXiv Detail & Related papers (2022-10-08T23:14:14Z) - Incorporating Rivalry in Reinforcement Learning for a Competitive Game [65.2200847818153]
This work proposes a novel reinforcement learning mechanism based on the social impact of rivalry behavior.
Our proposed model aggregates objective and social perception mechanisms to derive a rivalry score that is used to modulate the learning of artificial agents.
arXiv Detail & Related papers (2022-08-22T14:06:06Z) - Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
Next-generation autonomous and adaptive systems will largely include AI agents and humans working together as teams.
We focus on techniques which learn a model or policy of behavior through exploration and feedback.
arXiv Detail & Related papers (2022-05-13T07:33:49Z) - Learning Human Rewards by Inferring Their Latent Intelligence Levels in Multi-Agent Games: A Theory-of-Mind Approach with Application to Driving Data [18.750834997334664]
We argue that humans are boundedly rational and have different intelligence levels when reasoning about others' decision-making processes.
We propose a new multi-agent Inverse Reinforcement Learning framework that reasons about humans' latent intelligence levels during learning.
arXiv Detail & Related papers (2021-03-07T07:48:31Z) - COSMO: Conditional SEQ2SEQ-based Mixture Model for Zero-Shot Commonsense Question Answering [50.65816570279115]
Identifying the implicit causes and effects of a social context is the key capability that can enable machines to perform commonsense reasoning.
Current approaches in this realm lack the ability to perform commonsense reasoning upon facing an unseen situation.
We present Conditional SEQ2SEQ-based Mixture model (COSMO), which provides us with the capabilities of dynamic and diverse content generation.
arXiv Detail & Related papers (2020-11-02T07:08:19Z) - Multi-Principal Assistance Games [11.85513759444069]
Impossibility theorems in social choice theory and voting theory can be applied to such games.
We analyze in particular a bandit apprentice game in which the humans act first to demonstrate their individual preferences for the arms.
We propose a social choice method that uses shared control of a system to combine preference inference with social welfare optimization.
arXiv Detail & Related papers (2020-07-19T00:23:25Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.