What's Next in Affective Modeling? Large Language Models
- URL: http://arxiv.org/abs/2310.18322v1
- Date: Tue, 3 Oct 2023 16:39:20 GMT
- Title: What's Next in Affective Modeling? Large Language Models
- Authors: Nutchanon Yongsatianchot, Tobias Thejll-Madsen, Stacy Marsella
- Abstract summary: GPT-4 performs well across multiple emotion tasks.
It can distinguish emotion theories and come up with emotional stories.
We suggest that LLMs could play an important role in affective modeling.
- Score: 3.0902630634005797
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have recently been shown to perform well at
a range of tasks, from language understanding, reasoning, storytelling, and
information search to theory of mind. In an extension of this work, we explore
the ability of GPT-4 to solve tasks related to emotion prediction. GPT-4
performs well across multiple emotion tasks; it can distinguish emotion
theories and come up with emotional stories. We show that by prompting GPT-4 to
identify key factors of an emotional experience, it is able to manipulate the
emotional intensity of its own stories. Furthermore, we explore GPT-4's ability
on reverse appraisals by asking it to predict either the goal, belief, or
emotion of a person using the other two. In general, GPT-4 can make the correct
inferences. We suggest that LLMs could play an important role in affective
modeling; however, they will not fully replace work that attempts to model the
mechanisms underlying emotion-related processes.
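The reverse-appraisal task described above (given two of goal, belief, and emotion, ask the model to infer the third) can be sketched as a simple prompt builder. This is an illustrative reconstruction, not the paper's actual prompts: the scenario, field names, and function are all hypothetical.

```python
# Hypothetical sketch of the reverse-appraisal setup: given two of
# {goal, belief, emotion}, prompt a model to infer the third.
# Scenario text and field names are illustrative, not from the paper.

APPRAISAL_FIELDS = ("goal", "belief", "emotion")

def build_reverse_appraisal_prompt(scenario: str, known: dict, target: str) -> str:
    """Compose a prompt asking the model to infer `target` from the other two fields."""
    if target not in APPRAISAL_FIELDS:
        raise ValueError(f"target must be one of {APPRAISAL_FIELDS}")
    # List the two known appraisal components as bullet points.
    givens = "\n".join(
        f"- {field.capitalize()}: {known[field]}"
        for field in APPRAISAL_FIELDS
        if field != target
    )
    return (
        f"Scenario: {scenario}\n"
        f"{givens}\n"
        f"Question: Given the above, what is the person's {target}?"
    )

# Example: infer the emotion from a stated goal and belief.
prompt = build_reverse_appraisal_prompt(
    scenario="Alex is waiting to hear back about a job application.",
    known={"goal": "to get the job", "belief": "the interview went badly"},
    target="emotion",
)
```

The same function covers all three directions of the task by changing `target`; the resulting string would then be sent to the model via whatever chat API is in use.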
Related papers
- ECR-Chain: Advancing Generative Language Models to Better Emotion-Cause Reasoners through Reasoning Chains [61.50113532215864]
Causal Emotion Entailment (CEE) aims to identify the causal utterances in a conversation that stimulate the emotions expressed in a target utterance.
Current works in CEE mainly focus on modeling semantic and emotional interactions in conversations.
We introduce a step-by-step reasoning method, Emotion-Cause Reasoning Chain (ECR-Chain), to infer the stimulus from the target emotional expressions in conversations.
arXiv Detail & Related papers (2024-05-17T15:45:08Z) - Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with features deemed salient in their prediction of emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not treated as salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z) - How FaR Are Large Language Models From Agents with Theory-of-Mind? [69.41586417697732]
We propose a new evaluation paradigm for large language models (LLMs): Thinking for Doing (T4D).
T4D requires models to connect inferences about others' mental states to actions in social scenarios.
We introduce a zero-shot prompting framework, Foresee and Reflect (FaR), which provides a reasoning structure that encourages LLMs to anticipate future challenges.
arXiv Detail & Related papers (2023-10-04T06:47:58Z) - Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench [83.41621219298489]
We propose to evaluate the empathy of Large Language Models (LLMs).
We collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study.
We conduct a human evaluation involving more than 1,200 subjects worldwide.
arXiv Detail & Related papers (2023-08-07T15:18:30Z) - Is GPT a Computational Model of Emotion? Detailed Analysis [2.0001091112545066]
This paper investigates the emotional reasoning abilities of the GPT family of large language models from a component perspective.
It shows that GPT's predictions align significantly with human-provided appraisals and emotional labels.
However, GPT faces difficulties predicting emotion intensity and coping responses.
arXiv Detail & Related papers (2023-07-25T19:34:44Z) - Large Language Models Understand and Can be Enhanced by Emotional Stimuli [53.53886609012119]
We take the first step towards exploring the ability of Large Language Models to understand emotional stimuli.
Our experiments show that LLMs have a grasp of emotional intelligence, and their performance can be improved with emotional prompts.
Our human study results demonstrate that EmotionPrompt significantly boosts the performance of generative tasks.
arXiv Detail & Related papers (2023-07-14T00:57:12Z) - Does Conceptual Representation Require Embodiment? Insights From Large Language Models [9.390117546307042]
We compare representations of 4,442 lexical concepts between humans and ChatGPT models (GPT-3.5 and GPT-4).
We identify two main findings: 1) Both models strongly align with human representations in non-sensorimotor domains but lag in sensory and motor areas, with GPT-4 outperforming GPT-3.5; 2) GPT-4's gains are associated with its additional visual learning, which also appears to benefit related dimensions like haptics and imageability.
arXiv Detail & Related papers (2023-05-30T15:06:28Z) - Sparks of Artificial General Intelligence: Early experiments with GPT-4 [66.1188263570629]
GPT-4, developed by OpenAI, was trained using an unprecedented scale of compute and data.
We demonstrate that GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more.
We believe GPT-4 could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.
arXiv Detail & Related papers (2023-03-22T16:51:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.