Using Language Models to Decipher the Motivation Behind Human Behaviors
- URL: http://arxiv.org/abs/2503.15752v3
- Date: Sun, 06 Apr 2025 05:30:46 GMT
- Title: Using Language Models to Decipher the Motivation Behind Human Behaviors
- Authors: Yutong Xie, Qiaozhu Mei, Walter Yuan, Matthew O. Jackson
- Abstract summary: We show that by varying prompts to a large language model, we can elicit a full range of human behaviors. Then by analyzing which prompts are needed to elicit which behaviors, we can infer the motivations behind the human behaviors.
- Score: 17.855067753715797
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI presents a novel tool for deciphering the motivations behind human behaviors. We show that by varying prompts to a large language model, we can elicit a full range of human behaviors in a variety of scenarios based on classic economic games. Then, by analyzing which prompts are needed to elicit which behaviors, we can infer (decipher) the motivations behind the human behaviors. We also show how one can analyze the prompts to reveal relationships between the classic economic games, providing new insight into what different economic scenarios induce people to think about. Finally, we show how this deciphering process can be used to understand differences in the behavioral tendencies of different populations.
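As a rough illustration of the prompt-variation idea in the abstract, the sketch below elicits an allocation in a dictator game under several motivational framings and then matches an observed human choice to the framings that reproduce it most closely. Everything in it is an assumption for illustration only: the game, the framing prompts, and the `query_llm` placeholder (which returns canned replies in place of a real chat-completion API) do not come from the paper.

```python
# Illustrative sketch only (not the paper's actual prompts or code): vary the
# motivational framing given to an LLM in a dictator game, record the behavior
# each framing elicits, then "decipher" an observed human choice by finding the
# framings whose elicited behavior matches it most closely.
import re

ENDOWMENT = 100  # the dictator splits $100 with an anonymous recipient

# Hypothetical motivation framings; the paper's prompt set is not shown here.
MOTIVATION_PROMPTS = {
    "self-interest": "You care only about maximizing your own payoff.",
    "altruism": "You care mainly about the other player's payoff.",
    "fairness": "You believe both players deserve equal outcomes.",
    "inequality-aversion": "You dislike outcomes where payoffs differ greatly.",
}


def query_llm(system_prompt: str, task_prompt: str) -> str:
    """Stand-in for a real chat-completion API call; replace with your provider.
    Canned replies are returned here so the sketch runs without network access."""
    if "own payoff" in system_prompt:
        return "I give 0."
    if "other player's payoff" in system_prompt:
        return "I give 100."
    if "equal" in system_prompt:
        return "I give 50."
    return "I give 40."


def elicit_allocation(framing: str) -> int:
    """Ask the framed model how much of the endowment it gives away."""
    task = (f"You are the dictator in a dictator game with ${ENDOWMENT}. "
            "Reply with a single number: the amount you give to the other player.")
    reply = query_llm(framing, task)
    match = re.search(r"\d+", reply)
    return int(match.group()) if match else 0


def decipher(observed_gift: int, elicited: dict[str, int]) -> list[str]:
    """Return the motivation labels whose elicited gift is closest to the observation."""
    best = min(abs(gift - observed_gift) for gift in elicited.values())
    return [label for label, gift in elicited.items()
            if abs(gift - observed_gift) == best]


if __name__ == "__main__":
    elicited = {label: elicit_allocation(prompt)
                for label, prompt in MOTIVATION_PROMPTS.items()}
    # A human who gave $50 away is matched to fairness-style framings.
    print(decipher(50, elicited))  # -> ['fairness']
```

In this toy setup, the mapping from framings to behaviors is the "dictionary" that deciphering reads backwards: the framings that reproduce a person's choice are candidate motivations for it.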
Related papers
- A Taxonomy of Linguistic Expressions That Contribute To Anthropomorphism of Language Technologies [55.99010491370177]
Anthropomorphism is the attribution of human-like qualities to non-human objects or entities.
To productively discuss the impacts of anthropomorphism, we need a shared vocabulary for the vast variety of ways that language can be anthropomorphic.
arXiv Detail & Related papers (2025-02-14T02:43:46Z) - Eliciting Language Model Behaviors with Investigator Agents [93.34072434845162]
Language models exhibit complex, diverse behaviors when prompted with free-form text. We study the problem of behavior elicitation, where the goal is to search for prompts that induce specific target behaviors. We train investigator models to map randomly chosen target behaviors to a diverse distribution of outputs that elicit them.
arXiv Detail & Related papers (2025-02-03T10:52:44Z) - Enhancing Human-Like Responses in Large Language Models [0.0]
We focus on techniques that enhance natural language understanding, conversational coherence, and emotional intelligence in AI systems.
The study evaluates various approaches, including fine-tuning with diverse datasets, incorporating psychological principles, and designing models that better mimic human reasoning patterns.
arXiv Detail & Related papers (2025-01-09T07:44:06Z) - How Different AI Chatbots Behave? Benchmarking Large Language Models in Behavioral Economics Games [20.129667072835773]
This paper presents a comprehensive analysis of five leading large language models (LLMs) as they navigate a series of behavioral economics games. We aim to uncover and document both common and distinct behavioral patterns across a range of scenarios. The findings provide valuable insights into the strategic preferences of each LLM, highlighting potential implications for their deployment in critical decision-making roles.
arXiv Detail & Related papers (2024-12-16T21:25:45Z) - Language-based game theory in the age of artificial intelligence [0.6187270874122921]
Our meta-analysis shows that sentiment analysis can explain human behaviour beyond economic outcomes.
We hope this work sets the stage for a novel game theoretical approach that emphasizes the importance of language in human decisions.
arXiv Detail & Related papers (2024-03-13T20:21:20Z) - UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations [62.71847873326847]
We investigate the ability to model unusual, unexpected, and unlikely situations.
Given a piece of context with an unexpected outcome, this task requires reasoning abductively to generate an explanation.
We release a new English language corpus called UNcommonsense.
arXiv Detail & Related papers (2023-11-14T19:00:55Z) - The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs [50.32802502923367]
We study how language drives and influences social reasoning in a probabilistic goal-inference domain.
We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios.
Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
arXiv Detail & Related papers (2023-06-25T19:38:01Z) - From Outcome-Based to Language-Based Preferences [13.05235037907183]
We review the literature on models that try to explain human behavior in social interactions described by normal-form games with monetary payoffs.
We focus on the growing body of research showing that people react to the language in which actions are described, especially when it activates moral concerns.
arXiv Detail & Related papers (2022-06-15T05:11:58Z) - Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions? [62.74872383104381]
We investigate the effectiveness of natural language interventions for reading-comprehension systems.
We propose a new language understanding task, Linguistic Ethical Interventions (LEI), where the goal is to amend a question-answering (QA) model's unethical behavior.
arXiv Detail & Related papers (2021-06-02T20:57:58Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.