Psychologically-Inspired Causal Prompts
- URL: http://arxiv.org/abs/2305.01764v1
- Date: Tue, 2 May 2023 20:06:00 GMT
- Title: Psychologically-Inspired Causal Prompts
- Authors: Zhiheng Lyu, Zhijing Jin, Justus Mattern, Rada Mihalcea, Mrinmaya
Sachan, Bernhard Schoelkopf
- Abstract summary: We take sentiment classification as an example and look into the causal relations between the review (X) and sentiment (Y).
In this paper, we verbalize these three causal mechanisms of the human psychological process of sentiment classification into three different causal prompts.
- Score: 34.29555347562032
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: NLP datasets are richer than just input-output pairs; rather, they carry
causal relations between the input and output variables. In this work, we take
sentiment classification as an example and look into the causal relations
between the review (X) and sentiment (Y). As psychology studies show that
language can affect emotion, different psychological processes are evoked when
a person first makes a rating and then self-rationalizes their feeling in a
review (where the sentiment causes the review, i.e., Y -> X), versus first
describes their experience and weighs the pros and cons to give a final rating
(where the review causes the sentiment, i.e., X -> Y). Furthermore, it is also
a completely different psychological process if an annotator infers the
original rating of the user by theory of mind (ToM) (where the review causes
the rating, i.e., X -ToM-> Y). In this paper, we verbalize these three causal
mechanisms of the human psychological process of sentiment classification into
three different causal prompts, and study (1) how differently they perform, and
(2) what nature of sentiment classification data leads to agreement or
diversity in the model responses elicited by the prompts. We suggest future
work raise awareness of different causal structures in NLP tasks. Our code and
data are at https://github.com/cogito233/psych-causal-prompt
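The three causal framings above can be turned into prompt templates. The wording below is a hypothetical illustration, not the authors' actual prompts (those are in the linked repository); it sketches one plausible verbalization per mechanism:

```python
# Illustrative templates for the three causal mechanisms described in the
# abstract. All wording here is an assumption for demonstration purposes.
CAUSAL_PROMPTS = {
    # Y -> X: the sentiment came first; the review rationalizes it.
    "y_to_x": (
        "A customer decided on a sentiment and then wrote this review to "
        "justify it:\n{review}\n"
        "What sentiment did they start with, positive or negative?"
    ),
    # X -> Y: the review came first; the sentiment is concluded from it.
    "x_to_y": (
        "A customer described their experience in this review and then "
        "weighed the pros and cons:\n{review}\n"
        "What final sentiment do they arrive at, positive or negative?"
    ),
    # X -ToM-> Y: an annotator infers the writer's rating via theory of mind.
    "x_tom_y": (
        "You are an annotator reading someone else's review:\n{review}\n"
        "Putting yourself in the writer's shoes, what rating did they most "
        "likely give, positive or negative?"
    ),
}

def build_prompt(mechanism: str, review: str) -> str:
    """Fill the template for one causal mechanism with a concrete review."""
    return CAUSAL_PROMPTS[mechanism].format(review=review)
```

Comparing model responses across the three templates on the same review is the kind of agreement/diversity analysis the abstract describes.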
Related papers
- Do LLMs Think Fast and Slow? A Causal Study on Sentiment Analysis [136.13390762317698]
Sentiment analysis (SA) aims to identify the sentiment expressed in a text, such as a product review.
Given a review and the sentiment associated with it, this work formulates SA as a combination of two tasks.
We classify a sample as C1 if its overall sentiment score approximates an average of all the sentence-level sentiments in the review, and C2 if the overall sentiment score approximates an average of the peak and end sentiments.
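Read literally, C1 and C2 are two simple aggregation rules over sentence-level sentiment scores. A minimal sketch, assuming scores in [-1, 1] and taking "peak" to mean the largest-magnitude sentiment (an assumption; the summary does not define it):

```python
def overall_c1(sentence_sentiments):
    # C1: overall score approximates the mean of all sentence-level sentiments.
    return sum(sentence_sentiments) / len(sentence_sentiments)

def overall_c2(sentence_sentiments):
    # C2: overall score approximates the mean of the peak and end sentiments.
    # "Peak" is assumed here to be the largest-magnitude sentiment.
    peak = max(sentence_sentiments, key=abs)
    return (peak + sentence_sentiments[-1]) / 2
```

A review whose overall rating tracks `overall_c2` rather than `overall_c1` would be classified as C2 under this reading.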
arXiv Detail & Related papers (2024-04-17T04:04:34Z)
- How are Prompts Different in Terms of Sensitivity? [50.67313477651395]
We present a comprehensive prompt analysis based on the sensitivity of a function.
We use gradient-based saliency scores to empirically demonstrate how different prompts affect the relevance of input tokens to the output.
We introduce sensitivity-aware decoding which incorporates sensitivity estimation as a penalty term in the standard greedy decoding.
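The summary suggests sensitivity enters greedy decoding as a penalty term on token scores. A toy sketch of one greedy step under that assumption (the actual formulation is in the cited paper; `penalty` and the per-token `sensitivity` vector here are illustrative):

```python
def sensitivity_aware_greedy_step(logits, sensitivity, penalty=0.5):
    # Subtract a sensitivity penalty from each token's score, then take the
    # greedy argmax. This is an assumed form of the method, for illustration.
    scores = [l - penalty * s for l, s in zip(logits, sensitivity)]
    return max(range(len(scores)), key=scores.__getitem__)
```

With `penalty=0` this reduces to standard greedy decoding; a larger penalty steers the choice away from high-sensitivity tokens.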
arXiv Detail & Related papers (2023-11-13T10:52:01Z)
- MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks [49.60689355674541]
A rich literature in cognitive science has studied people's causal and moral intuitions.
This work has revealed a number of factors that systematically influence people's judgments.
We test whether large language models (LLMs) make causal and moral judgments about text-based scenarios that align with those of human participants.
arXiv Detail & Related papers (2023-10-30T15:57:32Z)
- PsyMo: A Dataset for Estimating Self-Reported Psychological Traits from Gait [4.831663144935878]
PsyMo is a novel, multi-purpose and multi-modal dataset for exploring psychological cues manifested in walking patterns.
We gathered walking sequences from 312 subjects, covering 7 walking variations and 6 camera angles.
In conjunction with walking sequences, participants filled in 6 psychological questionnaires, totalling 17 psychometric attributes related to personality, self-esteem, fatigue, aggressiveness and mental health.
arXiv Detail & Related papers (2023-08-21T11:06:43Z)
- CARE: Causality Reasoning for Empathetic Responses by Conditional Graph Generation [10.22893584383361]
We develop a new model, i.e., the Conditional Variational Graph Auto-Encoder (CVGAE), for the causality reasoning.
We name the whole framework CARE, an abbreviation of CAusality Reasoning for Empathetic conversation.
Experimental results indicate that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-01T03:45:26Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- Recognizing Emotion Cause in Conversations [82.88647116730691]
Recognizing the cause behind emotions in text is a fundamental yet under-explored area of research in NLP.
We introduce the task of recognizing emotion cause in conversations with an accompanying dataset named RECCON.
arXiv Detail & Related papers (2020-12-22T03:51:35Z)
- Appraisal Theories for Emotion Classification in Text [13.743991035051714]
We show that automatic classification approaches need to learn properties of events as latent variables.
We propose to make such interpretations explicit, following theories of cognitive appraisal of events.
Our results show that high quality appraisal dimension assignments in event descriptions lead to an improvement in the classification of discrete emotion categories.
arXiv Detail & Related papers (2020-03-31T12:43:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.