Application of Multiple Chain-of-Thought in Contrastive Reasoning for Implicit Sentiment Analysis
- URL: http://arxiv.org/abs/2503.07140v1
- Date: Mon, 10 Mar 2025 10:10:50 GMT
- Title: Application of Multiple Chain-of-Thought in Contrastive Reasoning for Implicit Sentiment Analysis
- Authors: Liwei Yang, Xinying Wang, Xiaotang Zhou, Zhengchao Wu, Ningning Tan
- Abstract summary: Implicit sentiment analysis aims to uncover emotions that are subtly expressed, often obscured by ambiguity and figurative language. We propose a novel Dual Reverse Chain Reasoning framework to enhance the performance of implicit sentiment analysis. We also introduce a Triple Reverse Chain Reasoning framework to address the limitations of random hypotheses.
- Score: 1.9472869221587836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit sentiment analysis aims to uncover emotions that are subtly expressed, often obscured by ambiguity and figurative language. To accomplish this task, large language models and multi-step reasoning are needed to identify those sentiments that are not explicitly stated. In this study, we propose a novel Dual Reverse Chain Reasoning (DRCR) framework to enhance the performance of implicit sentiment analysis. Inspired by deductive reasoning, the framework consists of three key steps: 1) hypothesize an emotional polarity and derive a reasoning process, 2) negate the initial hypothesis and derive a new reasoning process, and 3) contrast the two reasoning paths to deduce the final sentiment polarity. Building on this, we also introduce a Triple Reverse Chain Reasoning (TRCR) framework to address the limitations of random hypotheses. Both methods combine contrastive mechanisms and multi-step reasoning, significantly improving the accuracy of implicit sentiment classification. Experimental results demonstrate that both approaches outperform existing methods across various model scales, achieving state-of-the-art performance. This validates the effectiveness of combining contrastive reasoning and multi-step reasoning for implicit sentiment analysis.
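The three-step DRCR procedure amounts to a simple prompting loop. The sketch below assumes a generic chat-completion client; the `query_llm` placeholder, the `drcr_classify` wrapper, and the prompt wording are hypothetical illustrations of the steps described in the abstract, not the authors' implementation.

```python
# Minimal sketch of the three DRCR steps described in the abstract.
# The query_llm helper and the prompt wording are assumptions made for
# illustration; they are not the authors' prompts or code.

def query_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call (assumed, not specified here)."""
    raise NotImplementedError("plug in an LLM client here")


def drcr_classify(sentence: str, hypothesis: str = "positive") -> str:
    negation = "negative" if hypothesis == "positive" else "positive"

    # Step 1: hypothesize an emotional polarity and derive a reasoning chain.
    chain_for = query_llm(
        f"Assume the sentiment of the sentence '{sentence}' is {hypothesis}. "
        "Explain step by step why this could be true."
    )

    # Step 2: negate the initial hypothesis and derive a second chain.
    chain_against = query_llm(
        f"Assume the sentiment of the sentence '{sentence}' is {negation}. "
        "Explain step by step why this could be true."
    )

    # Step 3: contrast the two reasoning paths and deduce the final polarity.
    return query_llm(
        f"Sentence: {sentence}\n\n"
        f"Reasoning assuming '{hypothesis}':\n{chain_for}\n\n"
        f"Reasoning assuming '{negation}':\n{chain_against}\n\n"
        "Compare the two reasoning paths and answer with the final sentiment "
        "polarity: positive, negative, or neutral."
    )
```

Once a real client is plugged into `query_llm`, a call such as `drcr_classify("The waiter forgot our order twice.")` would return the model's final verdict after the contrastive third step.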
Related papers
- Unveiling the Magic of Code Reasoning through Hypothesis Decomposition and Amendment [54.62926010621013]
We introduce a novel task, code reasoning, to provide a new perspective for the reasoning abilities of large language models. We summarize three meta-benchmarks based on established forms of logical reasoning, and instantiate these into eight specific benchmark tasks. We present a new pathway exploration pipeline inspired by human intricate problem-solving methods.
arXiv Detail & Related papers (2025-02-17T10:39:58Z)
- PanoSent: A Panoptic Sextuple Extraction Benchmark for Multimodal Conversational Aspect-based Sentiment Analysis [74.41260927676747]
This paper bridges the gaps by introducing multimodal conversational Aspect-based Sentiment Analysis (ABSA).
To benchmark the tasks, we construct PanoSent, a dataset annotated both manually and automatically, featuring high quality, large scale, multimodality, multilingualism, multi-scenarios, and covering both implicit and explicit sentiment elements.
To effectively address the tasks, we devise a novel Chain-of-Sentiment reasoning framework, together with a novel multimodal large language model (namely Sentica) and a paraphrase-based verification mechanism.
arXiv Detail & Related papers (2024-08-18T13:51:01Z)
- Rethinking harmless refusals when fine-tuning foundation models [0.8571111167616167]
We investigate the degree to which fine-tuning in Large Language Models (LLMs) effectively mitigates versus merely conceals undesirable behavior.
We identify a pervasive phenomenon we term reason-based deception, where models either stop producing reasoning traces or produce seemingly ethical reasoning traces that belie the unethical nature of their final outputs.
arXiv Detail & Related papers (2024-06-27T22:08:22Z)
- Contrastive Chain-of-Thought Prompting [74.10511560147293]
We propose contrastive chain of thought to enhance language model reasoning.
Compared to the conventional chain of thought, our approach provides both valid and invalid reasoning demonstrations.
Our experiments on reasoning benchmarks demonstrate that contrastive chain of thought can serve as a general enhancement of chain-of-thought prompting (an illustrative prompt sketch follows the related-papers list below).
arXiv Detail & Related papers (2023-11-15T18:54:01Z)
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z)
- Towards Trustworthy Explanation: On Causal Rationalization [9.48539398357156]
We propose a new model of rationalization based on two causal desiderata, non-spuriousness and efficiency.
The superior performance of the proposed causal rationalization is demonstrated on real-world review and medical datasets.
arXiv Detail & Related papers (2023-06-25T03:34:06Z)
- Causal Intervention Improves Implicit Sentiment Analysis [67.43379729099121]
We propose a causal intervention model for Implicit Sentiment Analysis using Instrumental Variable (ISAIV).
We first review sentiment analysis from a causal perspective and analyze the confounders existing in this task.
Then, we introduce an instrumental variable to eliminate the confounding causal effects, thus extracting the pure causal effect between sentence and sentiment.
arXiv Detail & Related papers (2022-08-19T13:17:57Z)
- Counterfactual Reasoning for Out-of-distribution Multimodal Sentiment Analysis [56.84237932819403]
This paper aims to estimate and mitigate the adverse effect of the textual modality on strong OOD generalization.
Inspired by this, we devise a model-agnostic counterfactual framework for multimodal sentiment analysis.
arXiv Detail & Related papers (2022-07-24T03:57:40Z)
- An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented Dialogue Generation [21.106357884651363]
We introduce a neuro-symbolic approach to perform explicit reasoning that justifies model decisions through reasoning chains.
We propose a two-phase approach that consists of a hypothesis generator and a reasoner.
The whole system is trained by exploiting raw textual dialogues without using any reasoning chain annotations.
arXiv Detail & Related papers (2022-03-11T10:44:08Z)
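The Contrastive Chain-of-Thought Prompting entry above pairs valid and invalid reasoning demonstrations in the prompt, the same contrastive mechanism the DRCR and TRCR frameworks build on. The sketch below shows one way such a prompt could be assembled; the demonstration text and the `build_contrastive_prompt` helper are invented for illustration and are not taken from the cited paper.

```python
# Illustrative construction of a contrastive chain-of-thought prompt that
# pairs one valid and one invalid reasoning demonstration. The example
# demonstration below is invented, not taken from the cited paper.
CONTRASTIVE_DEMO = (
    "Sentence: The service was slow, but the staff apologized and comped our dessert.\n"
    "Correct reasoning: slow service is negative, yet the apology and free dessert "
    "signal goodwill, so the overall sentiment leans positive.\n"
    "Incorrect reasoning: the word 'slow' appears, so the sentiment must be negative.\n"
    "Answer: positive\n"
)


def build_contrastive_prompt(sentence: str) -> str:
    """Prepend the contrastive demonstration to a new query."""
    return (
        CONTRASTIVE_DEMO
        + f"\nSentence: {sentence}\n"
        "State the correct reasoning, avoid the kind of mistake shown above, "
        "and give the final sentiment polarity."
    )
```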
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.