Socratic Reasoning Improves Positive Text Rewriting
- URL: http://arxiv.org/abs/2403.03029v1
- Date: Tue, 5 Mar 2024 15:05:06 GMT
- Title: Socratic Reasoning Improves Positive Text Rewriting
- Authors: Anmol Goel, Nico Daheim, Iryna Gurevych
- Abstract summary: SocraticReframe uses a sequence of question-answer pairs to rationalize the thought rewriting process.
We show that Socratic rationales significantly improve positive text rewriting according to both automatic and human evaluations guided by criteria from psychotherapy research.
- Score: 60.56097569286398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reframing a negative into a positive thought is at the crux of several
cognitive approaches to mental health and psychotherapy that could be made more
accessible by large language model-based solutions. Such reframing is typically
non-trivial and requires multiple rationalization steps to uncover the
underlying issue of a negative thought and transform it to be more positive.
However, this rationalization process is currently neglected by both datasets
and models which reframe thoughts in one step. In this work, we address this
gap by augmenting open-source datasets for positive text rewriting with
synthetically-generated Socratic rationales using a novel framework called
SocraticReframe. SocraticReframe uses a sequence of
question-answer pairs to rationalize the thought rewriting process. We show
that such Socratic rationales significantly improve positive text rewriting for
different open-source LLMs according to both automatic and human evaluations
guided by criteria from psychotherapy research.
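The abstract's core idea, rationalizing a rewrite through a sequence of question-answer pairs before producing the reframe, can be sketched as a simple prompt-assembly function. This is an illustrative sketch only: the questions below and the `build_prompt` helper are hypothetical examples, not the actual prompts or pipeline from the paper.

```python
# Hypothetical Socratic questions; the paper's real rationales are
# synthetically generated, not drawn from a fixed list like this one.
SOCRATIC_QUESTIONS = [
    "What evidence supports or contradicts this thought?",
    "What would you say to a friend who had this thought?",
    "What is a more balanced way to see this situation?",
]

def build_prompt(negative_thought: str, answers: list[str]) -> str:
    """Assemble a rewriting prompt that surfaces the rationalization
    steps as question-answer pairs before asking for the reframe."""
    lines = [f"Negative thought: {negative_thought}", ""]
    for question, answer in zip(SOCRATIC_QUESTIONS, answers):
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append("")
    lines.append("Positive reframe:")
    return "\n".join(lines)
```

The assembled prompt would then be passed to an LLM, which conditions its rewrite on the intermediate rationales rather than reframing in one step.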
Related papers
- Thought-Path Contrastive Learning via Premise-Oriented Data Augmentation for Logical Reading Comprehension [9.67774998354062]
Previous research has primarily focused on enhancing logical reasoning capabilities through Chain-of-Thought (CoT) or data augmentation.
We propose a Premise-Oriented Data Augmentation (PODA) framework to generate CoT rationales including analyses for both correct and incorrect options.
We also introduce a novel thought-path contrastive learning method that compares reasoning paths between the original and counterfactual samples.
arXiv Detail & Related papers (2024-09-22T15:44:43Z) - Promoting Constructive Deliberation: Reframing for Receptiveness [5.4346288442609945]
We propose automatic reframing of disagreeing responses to signal receptiveness to a preceding comment.
We identify six strategies for reframing. We automatically reframe replies to comments according to each strategy, using a Reddit dataset.
We find that the replies generated with our framework are perceived to be significantly more receptive than the original replies and a generic receptiveness baseline.
arXiv Detail & Related papers (2024-05-23T21:35:22Z) - What if...?: Thinking Counterfactual Keywords Helps to Mitigate Hallucination in Large Multi-modal Models [50.97705264224828]
We propose Counterfactual Inception, a novel method that implants counterfactual thinking into Large Multi-modal Models.
We aim for the models to engage with and generate responses that span a wider contextual scene understanding.
Comprehensive analyses across various LMMs, including both open-source and proprietary models, corroborate that counterfactual thinking significantly reduces hallucination.
arXiv Detail & Related papers (2024-03-20T11:27:20Z) - Generating Chain-of-Thoughts with a Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought [70.30423016640749]
Chain-of-thoughts (CoT) methods were proposed to guide large language models to reason step-by-step, enabling problem solving from simple to complex.
Evaluation by large language models (LLMs) is typically noisy and unreliable, potentially misleading the generation process when selecting promising intermediate thoughts.
In this paper, motivated by Vapnik's principle, we use pairwise-comparison evaluation instead of point-wise scoring to search for promising intermediate thoughts.
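The idea of replacing point-wise scoring with pairwise comparisons can be sketched as a knockout tournament over candidate thoughts. This is a minimal sketch of the general technique, not the paper's algorithm; the `compare` callback is a hypothetical stand-in for an LLM judge that returns the preferred of two candidates.

```python
import random

def pairwise_best(thoughts, compare):
    """Select the most promising candidate via pairwise comparisons
    (a simple knockout tournament) instead of point-wise scoring.
    `compare(a, b)` returns whichever of the two it prefers; in
    practice this would be an LLM-judge call."""
    pool = list(thoughts)
    while len(pool) > 1:
        random.shuffle(pool)  # randomize pairings each round
        next_pool = [compare(pool[i], pool[i + 1])
                     for i in range(0, len(pool) - 1, 2)]
        if len(pool) % 2 == 1:      # odd one out gets a bye
            next_pool.append(pool[-1])
        pool = next_pool
    return pool[0]
```

Pairwise preference is often easier to elicit reliably than an absolute score, which is the Vapnik-style motivation: solve the simpler comparison problem directly rather than the harder scoring problem as an intermediate step.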
arXiv Detail & Related papers (2024-02-10T09:51:03Z) - Negotiated Reasoning: On Provably Addressing Relative
Over-Generalization [49.5896371203566]
Over-generalization is a thorny issue in cognitive science, where people may become overly cautious due to past experiences.
Agents in multi-agent reinforcement learning (MARL) have also been found to suffer from relative over-generalization (RO), as people do, and get stuck in sub-optimal cooperation.
Recent methods have shown that assigning reasoning ability to agents can mitigate RO algorithmically and empirically, but there has been a lack of theoretical understanding of RO.
arXiv Detail & Related papers (2023-06-08T16:57:12Z) - Cognitive Reframing of Negative Thoughts through Human-Language Model
Interaction [7.683627834905736]
We conduct a human-centered study of how language models may assist people in reframing negative thoughts.
Based on literature, we define a framework of seven linguistic attributes that can be used to reframe a thought.
We collect a dataset of 600 situations, thoughts and reframes from practitioners and use it to train a retrieval-enhanced in-context learning model.
arXiv Detail & Related papers (2023-05-04T00:12:52Z) - NapSS: Paragraph-level Medical Text Simplification via Narrative
Prompting and Sentence-matching Summarization [46.772517928718216]
We propose a summarize-then-simplify two-stage strategy, which we call NapSS.
NapSS identifies the relevant content to simplify while ensuring that the original narrative flow is preserved.
Our model performs significantly better than the seq2seq baseline on an English medical corpus.
arXiv Detail & Related papers (2023-02-11T02:20:25Z) - Interlock-Free Multi-Aspect Rationalization for Text Classification [33.33452117387646]
We show that our approach addresses the interlocking problem in the multi-aspect setting.
We propose a multi-stage training method incorporating an additional self-supervised contrastive loss.
Empirical results on the beer review dataset show that our method significantly improves rationalization performance.
arXiv Detail & Related papers (2022-05-13T16:38:38Z) - Improving Response Quality with Backward Reasoning in Open-domain
Dialogue Systems [53.160025961101354]
We propose to train the generation model in a bidirectional manner by adding a backward reasoning step to the vanilla encoder-decoder training.
The proposed backward reasoning step pushes the model to produce more informative and coherent content.
Our method can improve response quality without introducing side information.
arXiv Detail & Related papers (2021-04-30T20:38:27Z) - Counterfactual Off-Policy Training for Neural Response Generation [94.76649147381232]
We propose to explore potential responses by counterfactual reasoning.
Training on the counterfactual responses under the adversarial learning framework helps to explore the high-reward area of the potential response space.
An empirical study on the DailyDialog dataset shows that our approach significantly outperforms the HRED model.
arXiv Detail & Related papers (2020-04-29T22:46:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.