Do LLMs Think Fast and Slow? A Causal Study on Sentiment Analysis
- URL: http://arxiv.org/abs/2404.11055v2
- Date: Sun, 27 Oct 2024 23:50:25 GMT
- Title: Do LLMs Think Fast and Slow? A Causal Study on Sentiment Analysis
- Authors: Zhiheng Lyu, Zhijing Jin, Fernando Gonzalez, Rada Mihalcea, Bernhard Schölkopf, Mrinmaya Sachan
- Abstract summary: Sentiment analysis (SA) aims to identify the sentiment expressed in a text, such as a product review.
Given a review and the sentiment associated with it, this work formulates SA as a combination of two tasks.
We classify a sample as C1 if its overall sentiment score approximates an average of all the sentence-level sentiments in the review, and C2 if the overall sentiment score approximates an average of the peak and end sentiments.
- Score: 136.13390762317698
- Abstract: Sentiment analysis (SA) aims to identify the sentiment expressed in a text, such as a product review. Given a review and the sentiment associated with it, this work formulates SA as a combination of two tasks: (1) a causal discovery task that distinguishes whether a review "primes" the sentiment (Causal Hypothesis C1), or the sentiment "primes" the review (Causal Hypothesis C2); and (2) the traditional prediction task to model the sentiment using the review as input. Using the peak-end rule in psychology, we classify a sample as C1 if its overall sentiment score approximates an average of all the sentence-level sentiments in the review, and C2 if the overall sentiment score approximates an average of the peak and end sentiments. For the prediction task, we use the discovered causal mechanisms behind the samples to improve LLM performance by proposing causal prompts that give the models an inductive bias of the underlying causal graph, leading to substantial improvements by up to 32.13 F1 points on zero-shot five-class SA. Our code is at https://github.com/cogito233/causal-sa
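The peak-end classification described in the abstract can be written down directly. Below is a minimal sketch, not the authors' released implementation: the 1-5 scoring scale, the choice of the "peak" as the sentence farthest from the neutral midpoint, and the tie-breaking rule are assumptions made only for illustration.

```python
# Minimal sketch of the peak-end rule for labeling a sample as C1 or C2.
# Assumptions (not from the paper's released code): sentence sentiments are
# scores on a 1-5 scale, the "peak" is the sentence farthest from the neutral
# midpoint, and ties are broken in favor of C1.

from typing import List


def classify_causal_hypothesis(sentence_scores: List[float],
                               overall_score: float,
                               neutral: float = 3.0) -> str:
    """Label a review as C1 (review primes sentiment) or C2 (sentiment primes review)."""
    # C1: the overall score approximates the average of all sentence-level sentiments.
    c1_estimate = sum(sentence_scores) / len(sentence_scores)

    # C2: the overall score approximates the average of the peak and end sentiments.
    peak = max(sentence_scores, key=lambda s: abs(s - neutral))
    end = sentence_scores[-1]
    c2_estimate = (peak + end) / 2.0

    c1_error = abs(overall_score - c1_estimate)
    c2_error = abs(overall_score - c2_estimate)
    return "C1" if c1_error <= c2_error else "C2"


# Example: two positive sentences followed by a strongly negative ending; the
# overall score sits near the peak/end average, so the sample is labeled C2.
print(classify_causal_hypothesis([4.0, 4.0, 1.0], overall_score=1.5))  # -> C2
```

In the paper, the discovered label is then used to choose a causal prompt that gives the model an inductive bias toward the corresponding causal graph; the exact prompt wording is given in the paper and the linked repository rather than reproduced here.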
Related papers
- How are Prompts Different in Terms of Sensitivity? [50.67313477651395]
We present a comprehensive prompt analysis based on the sensitivity of a function.
We use gradient-based saliency scores to empirically demonstrate how different prompts affect the relevance of input tokens to the output.
We introduce sensitivity-aware decoding, which incorporates sensitivity estimation as a penalty term in standard greedy decoding (a sketch of this penalized decoding appears after this list).
arXiv Detail & Related papers (2023-11-13T10:52:01Z) - Psychologically-Inspired Causal Prompts [34.29555347562032]
We take sentiment classification as an example and look into the causal relations between the review (X) and the sentiment (Y).
In this paper, we verbalize these three causal mechanisms of human psychological processes of sentiment classification into three different causal prompts.
arXiv Detail & Related papers (2023-05-02T20:06:00Z) - Causal Intervention Improves Implicit Sentiment Analysis [67.43379729099121]
We propose a causal intervention model for Implicit Sentiment Analysis using an Instrumental Variable (ISAIV).
We first review sentiment analysis from a causal perspective and analyze the confounders existing in this task.
Then, we introduce an instrumental variable to eliminate the confounding causal effects, thus extracting the pure causal effect between sentence and sentiment.
arXiv Detail & Related papers (2022-08-19T13:17:57Z) - Spatio-Temporal Graph Representation Learning for Fraudster Group Detection [50.779498955162644]
Companies may hire fraudster groups to write fake reviews to either demote competitors or promote their own businesses.
To detect such groups, a common approach is to model fraudster groups' static networks.
We propose to first capitalize on the effectiveness of the HIN-RNN for reviewers' representation learning.
arXiv Detail & Related papers (2022-01-07T08:01:38Z) - Polarity in the Classroom: A Case Study Leveraging Peer Sentiment Toward Scalable Assessment [4.588028371034406]
Accurately grading open-ended assignments in large or massive open online courses (MOOCs) is non-trivial.
In this work, we detail the process by which we create our domain-dependent lexicon and aspect-informed review form.
We end by analyzing validity and discussing conclusions from our corpus of over 6800 peer reviews from nine courses.
arXiv Detail & Related papers (2021-08-02T15:45:11Z) - Causal Effects of Linguistic Properties [41.65859219291606]
We consider the problem of using observational data to estimate the causal effects of linguistic properties.
We introduce TextCause, an algorithm for estimating causal effects of linguistic properties.
We show that the proposed method outperforms related approaches when estimating the effect of Amazon review sentiment.
arXiv Detail & Related papers (2020-10-24T15:43:37Z) - Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn ⟨sentiment, aspect⟩ joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z) - A Unified Dual-view Model for Review Summarization and Sentiment Classification with Inconsistency Loss [51.448615489097236]
Acquiring accurate summarization and sentiment from user reviews is an essential component of modern e-commerce platforms.
We propose a novel dual-view model that jointly improves the performance of these two tasks.
Experiment results on four real-world datasets from different domains demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2020-06-02T13:34:11Z)
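As flagged in the first related paper above, sensitivity-aware decoding penalizes the standard greedy choice with a sensitivity estimate. The sketch below is a hypothetical illustration of that idea only: the toy language-model interface, the `sensitivity` callable, and the penalty weight `lam` are stand-ins, not that paper's actual implementation.

```python
# Sketch of sensitivity-aware greedy decoding: pick the token that maximizes
# log p(token | context) - lam * sensitivity(context, token).
# The LM and sensitivity interfaces below are illustrative placeholders.

from typing import Callable, Dict, List

LogProbFn = Callable[[List[str]], Dict[str, float]]   # context -> {token: log-prob}
SensitivityFn = Callable[[List[str], str], float]     # (context, token) -> penalty


def sensitivity_aware_greedy_decode(next_token_logprobs: LogProbFn,
                                    sensitivity: SensitivityFn,
                                    prompt: List[str],
                                    lam: float = 0.5,
                                    max_new_tokens: int = 20,
                                    eos: str = "<eos>") -> List[str]:
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        scores = next_token_logprobs(tokens)
        # Penalized greedy step instead of a plain argmax over log-probabilities.
        best = max(scores, key=lambda t: scores[t] - lam * sensitivity(tokens, t))
        if best == eos:
            break
        tokens.append(best)
    return tokens
```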