Human behaviour through a LENS: How Linguistic content triggers Emotions and Norms and determines Strategy choices
- URL: http://arxiv.org/abs/2403.15293v1
- Date: Fri, 22 Mar 2024 15:40:11 GMT
- Title: Human behaviour through a LENS: How Linguistic content triggers Emotions and Norms and determines Strategy choices
- Authors: Valerio Capraro
- Abstract summary: This article proposes a novel framework that transcends the traditional confines of outcome-based preference models.
According to the LENS model, the Linguistic description of the decision problem triggers Emotional responses and suggests potential Norms of behaviour.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last two decades, a growing body of experimental research has provided evidence that linguistic frames influence human behaviour in economic games, beyond the economic consequences of the available actions. This article proposes a novel framework that transcends the traditional confines of outcome-based preference models. According to the LENS model, the Linguistic description of the decision problem triggers Emotional responses and suggests potential Norms of behaviour, which then interact to shape an individual's Strategic choice. The article reviews experimental evidence that supports each path of the LENS model. Furthermore, it identifies and discusses several critical research questions that arise from this model, pointing towards avenues for future inquiry.
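To make the L → E/N → S pathways concrete, the sketch below is a minimal, purely illustrative Python toy of the flow described in the abstract: a linguistic description is appraised for emotional and normative cues, which then jointly determine a strategy. Everything in it (the cue lexicons, the linear interaction rule, the threshold, and the names `appraise` and `choose_strategy`) is a hypothetical placeholder, not the paper's formalisation.

```python
# Illustrative sketch only: the LENS model is a conceptual framework, not an
# algorithm. The lexicons, weights, and threshold below are hypothetical.
from dataclasses import dataclass


@dataclass
class Appraisal:
    emotion: float  # valence triggered by the wording, in [-1, 1]
    norm: float     # strength of the behavioural norm the wording suggests, in [0, 1]


def appraise(description: str) -> Appraisal:
    """Toy appraisal of a decision problem's linguistic frame (L -> E, N)."""
    words = description.lower().split()
    # Hypothetical cue lexicons standing in for the emotional and normative
    # content that real frames carry (e.g. "give" vs "take" framings).
    positive, negative = {"give", "share", "help"}, {"take", "steal", "punish"}
    norm_cues = {"give", "share", "help", "cooperate", "donate"}
    emotion = sum(w in positive for w in words) - sum(w in negative for w in words)
    emotion = max(-1.0, min(1.0, emotion / max(len(words), 1) * 5))
    norm = min(1.0, sum(w in norm_cues for w in words) / max(len(words), 1) * 5)
    return Appraisal(emotion=emotion, norm=norm)


def choose_strategy(description: str, threshold: float = 0.3) -> str:
    """Emotions and norms interact to shape the strategic choice (E, N -> S)."""
    a = appraise(description)
    # Assumed interaction rule: cooperate when the frame's emotional pull and
    # suggested norm jointly exceed a threshold; defect otherwise.
    score = 0.5 * a.emotion + 0.5 * a.norm
    return "cooperate" if score >= threshold else "defect"


if __name__ == "__main__":
    print(choose_strategy("You may give some of your endowment to your partner"))
    print(choose_strategy("You may take some of your partner's endowment"))
```

Under these assumptions, a "give" framing of the same economic game yields cooperation while a "take" framing yields defection, mirroring the framing effects the article reviews.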
Related papers
- Capturing Human Cognitive Styles with Language: Towards an Experimental Evaluation Paradigm [8.479236801214816]
We introduce an experiment-based framework for evaluating language-based cognitive style models against human behavior.
We find that language features, intended to capture cognitive style, can predict participants' decision style with moderate-to-high accuracy.
arXiv Detail & Related papers (2025-02-18T23:08:15Z) - How Different AI Chatbots Behave? Benchmarking Large Language Models in Behavioral Economics Games [20.129667072835773]
This paper presents a comprehensive analysis of five leading large language models (LLMs) as they navigate a series of behavioral economics games.
We aim to uncover and document both common and distinct behavioral patterns across a range of scenarios.
The findings provide valuable insights into the strategic preferences of each LLM, highlighting potential implications for their deployment in critical decision-making roles.
arXiv Detail & Related papers (2024-12-16T21:25:45Z) - Diverging Preferences: When do Annotators Disagree and do Models Know? [92.24651142187989]
We develop a taxonomy of disagreement sources spanning 10 categories across four high-level classes.
We find that the majority of disagreements are in opposition to standard reward modeling approaches.
We develop methods for identifying diverging preferences to mitigate their influence on evaluation and training.
arXiv Detail & Related papers (2024-10-18T17:32:22Z) - A generative framework to bridge data-driven models and scientific theories in language neuroscience [84.76462599023802]
We present generative explanation-mediated validation, a framework for generating concise explanations of language selectivity in the brain.
We show that explanatory accuracy is closely related to the predictive power and stability of the underlying statistical models.
arXiv Detail & Related papers (2024-10-01T15:57:48Z) - Investigating Context Effects in Similarity Judgements in Large Language Models [6.421776078858197]
Large Language Models (LLMs) have revolutionised the capability of AI models in comprehending and generating natural language text.
We report an ongoing investigation into the alignment of LLMs with human judgements as affected by order bias.
arXiv Detail & Related papers (2024-08-20T10:26:02Z) - How Personality Traits Influence Negotiation Outcomes? A Simulation based on Large Language Models [2.7010154811483167]
This paper introduces a simulation framework centered on Large Language Model (LLM) agents endowed with synthesized personality traits.
The experimental results show that the behavioral tendencies of LLM-based simulations could reproduce behavioral patterns observed in human negotiations.
arXiv Detail & Related papers (2024-07-16T09:52:51Z) - Language-based game theory in the age of artificial intelligence [0.6187270874122921]
Our meta-analysis shows that sentiment analysis can explain human behaviour beyond economic outcomes.
We hope this work sets the stage for a novel game theoretical approach that emphasizes the importance of language in human decisions.
arXiv Detail & Related papers (2024-03-13T20:21:20Z) - A Theory of LLM Sampling: Part Descriptive and Part Prescriptive [53.08398658452411]
Large Language Models (LLMs) are increasingly utilized in autonomous decision-making.
We show that this sampling behavior resembles that of human decision-making.
We show that this deviation of a sample from the statistical norm towards a prescriptive component consistently appears in concepts across diverse real-world domains.
arXiv Detail & Related papers (2024-02-16T18:28:43Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z) - Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey [50.58063811745676]
This work provides a survey of practical methods for addressing potential threats and societal harms from language generation models.
We draw on several prior works on language model risks to present a structured overview of strategies for detecting and ameliorating different kinds of risks/harms of language generators.
arXiv Detail & Related papers (2022-10-14T10:43:39Z) - Differentiating Approach and Avoidance from Traditional Notions of Sentiment in Economic Contexts [0.0]
Conviction Narrative Theory places Approach and Avoidance sentiment at the heart of real-world decision-making.
This research introduces new techniques to differentiate Approach and Avoidance from positive and negative sentiment on a fundamental level of meaning.
arXiv Detail & Related papers (2021-12-05T16:05:16Z) - Schrödinger's Tree -- On Syntax and Neural Language Models [10.296219074343785]
Language models have emerged as NLP's workhorse, displaying increasingly fluent generation capabilities.
We observe a lack of clarity across numerous dimensions, which influences the hypotheses that researchers form.
We outline the implications of the different types of research questions exhibited in studies on syntax.
arXiv Detail & Related papers (2021-10-17T18:25:23Z) - Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions? [62.74872383104381]
We investigate the effectiveness of natural language interventions for reading-comprehension systems.
We propose a new language understanding task, Linguistic Ethical Interventions (LEI), where the goal is to amend a question-answering (QA) model's unethical behavior.
arXiv Detail & Related papers (2021-06-02T20:57:58Z)