Promoting Constructive Deliberation: Reframing for Receptiveness
- URL: http://arxiv.org/abs/2405.15067v3
- Date: Fri, 04 Oct 2024 21:55:31 GMT
- Title: Promoting Constructive Deliberation: Reframing for Receptiveness
- Authors: Gauri Kambhatla, Matthew Lease, Ashwin Rajadesingan
- Abstract summary: We propose automatic reframing of disagreeing responses to signal receptiveness to a preceding comment.
We identify six strategies for reframing. We automatically reframe replies to comments according to each strategy, using a Reddit dataset.
We find that the replies generated with our framework are perceived to be significantly more receptive than the original replies and a generic receptiveness baseline.
- Score: 5.4346288442609945
- License:
- Abstract: To promote constructive discussion of controversial topics online, we propose automatic reframing of disagreeing responses to signal receptiveness to a preceding comment. Drawing on research from psychology, communications, and linguistics, we identify six strategies for reframing. We automatically reframe replies to comments according to each strategy, using a Reddit dataset. Through human-centered experiments, we find that the replies generated with our framework are perceived to be significantly more receptive than the original replies and a generic receptiveness baseline. We illustrate how transforming receptiveness, a particular social science construct, into a computational framework, can make LLM generations more aligned with human perceptions. We analyze and discuss the implications of our results, and highlight how a tool based on our framework might be used for more teachable and creative content moderation.
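As a rough illustration of how such strategy-conditioned reframing could be implemented with an off-the-shelf LLM, the sketch below prompts a model to rewrite a disagreeing reply under a chosen strategy. The strategy names, prompt wording, and the call_llm helper are hypothetical placeholders, not the paper's actual six strategies or prompts.

```python
# Minimal sketch of strategy-conditioned reframing with an LLM.
# The strategy names and call_llm() are illustrative placeholders,
# not the paper's actual strategies or prompts.

REFRAMING_STRATEGIES = {
    "acknowledge": "Begin by acknowledging a valid point in the parent comment.",
    "hedge": "Soften absolute claims with hedging language.",
    "question": "Express part of the disagreement as a genuine question.",
}

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API (hosted or local)."""
    raise NotImplementedError

def reframe_reply(parent_comment: str, reply: str, strategy: str) -> str:
    instruction = REFRAMING_STRATEGIES[strategy]
    prompt = (
        "Rewrite the reply so that it keeps the same disagreement but signals "
        f"receptiveness to the parent comment. {instruction}\n\n"
        f"Parent comment: {parent_comment}\n"
        f"Original reply: {reply}\n"
        "Reframed reply:"
    )
    return call_llm(prompt)
```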
Related papers
- "Is ChatGPT a Better Explainer than My Professor?": Evaluating the Explanation Capabilities of LLMs in Conversation Compared to a Human Baseline [23.81489190082685]
Explanations form the foundation of knowledge sharing and build upon communication principles, social dynamics, and learning theories.
Our research leverages previous work on explanatory acts, a framework for understanding the strategies that explainers and explainees employ in a conversation to explain, understand, and engage with the other party.
With the rise of generative AI in the past year, we hope to better understand the capabilities of Large Language Models (LLMs) and how they can augment expert explainers' capabilities in conversational settings.
arXiv Detail & Related papers (2024-06-26T17:33:51Z) - Joint Learning of Context and Feedback Embeddings in Spoken Dialogue [3.8673630752805446]
We investigate the possibility of embedding short dialogue contexts and feedback responses in the same representation space using a contrastive learning objective.
Our results show that the model outperforms humans given the same ranking task and that the learned embeddings carry information about the conversational function of feedback responses.
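A rough sketch of the kind of contrastive objective described in this entry, embedding dialogue contexts and feedback responses in a shared space with in-batch negatives; the stand-in encoders and dimensions below are assumptions, not the paper's architecture.

```python
# Sketch of an InfoNCE-style contrastive objective over (context, feedback) pairs.
# Encoders and sizes are illustrative assumptions, not the paper's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 256):
        super().__init__()
        self.context_encoder = nn.EmbeddingBag(vocab_size, dim)   # stand-in for a real text encoder
        self.feedback_encoder = nn.EmbeddingBag(vocab_size, dim)  # stand-in for a real text encoder

    def forward(self, context_ids, feedback_ids):
        # context_ids, feedback_ids: (batch, seq_len) token-id tensors
        c = F.normalize(self.context_encoder(context_ids), dim=-1)
        f = F.normalize(self.feedback_encoder(feedback_ids), dim=-1)
        return c, f

def contrastive_loss(c, f, temperature: float = 0.07):
    # In-batch negatives: matching (context, feedback) pairs lie on the diagonal.
    logits = c @ f.t() / temperature
    targets = torch.arange(c.size(0), device=c.device)
    return F.cross_entropy(logits, targets)
```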
arXiv Detail & Related papers (2024-06-11T14:22:37Z) - Socratic Reasoning Improves Positive Text Rewriting [60.56097569286398]
SocraticReframe uses a sequence of question-answer pairs to rationalize the thought rewriting process.
We show that Socratic rationales significantly improve positive text rewriting according to both automatic and human evaluations guided by criteria from psychotherapy research.
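A hypothetical sketch of the Socratic-rationalization idea: elicit brief question-answer rationales about the negative thought first, then condition the positive rewrite on them. The questions and call_llm helper are placeholders, not the paper's prompts.

```python
# Sketch of Socratic-style rationalization before positive rewriting.
# Questions and call_llm() are illustrative placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API."""
    raise NotImplementedError

SOCRATIC_QUESTIONS = [
    "What evidence supports or contradicts this negative thought?",
    "Is there a more balanced way to look at the situation?",
    "What would you tell a friend who had this thought?",
]

def socratic_rewrite(negative_thought: str) -> str:
    qa_pairs = []
    for q in SOCRATIC_QUESTIONS:
        a = call_llm(f"Thought: {negative_thought}\nQuestion: {q}\nAnswer briefly:")
        qa_pairs.append(f"Q: {q}\nA: {a}")
    rationale = "\n".join(qa_pairs)
    return call_llm(
        f"Thought: {negative_thought}\n{rationale}\n"
        "Using the reasoning above, rewrite the thought in a more positive, realistic way:"
    )
```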
arXiv Detail & Related papers (2024-03-05T15:05:06Z) - Reasoning in Conversation: Solving Subjective Tasks through Dialogue Simulation for Large Language Models [56.93074140619464]
We propose RiC (Reasoning in Conversation), a method that focuses on solving subjective tasks through dialogue simulation.
The motivation of RiC is to mine useful contextual information by simulating dialogues instead of supplying chain-of-thought style rationales.
We evaluate both API-based and open-source LLMs including GPT-4, ChatGPT, and OpenChat across twelve tasks.
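A simplified sketch of the dialogue-simulation idea behind RiC: simulate a short discussion about the input to surface relevant context, then answer the subjective task conditioned on that simulated dialogue. The prompts and call_llm helper are placeholders, not the paper's actual prompts.

```python
# Sketch: simulate a dialogue to mine contextual information, then answer.
# call_llm() and the prompt wording are illustrative placeholders.

def call_llm(prompt: str) -> str:
    raise NotImplementedError

def reasoning_in_conversation(task_instruction: str, text: str) -> str:
    simulated_dialogue = call_llm(
        "Simulate a short conversation between two people discussing the text below, "
        "bringing up background knowledge and differing viewpoints relevant to it.\n\n"
        f"Text: {text}"
    )
    return call_llm(
        f"{task_instruction}\n\nText: {text}\n\n"
        f"Here is a simulated discussion that may provide useful context:\n{simulated_dialogue}\n\n"
        "Final answer:"
    )
```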
arXiv Detail & Related papers (2024-02-27T05:37:10Z) - Leveraging Implicit Feedback from Deployment Data in Dialogue [83.02878726357523]
We study improving social conversational agents by learning from natural dialogue between users and a deployed model.
We leverage implicit signals such as user response length, sentiment, and the reactions in future human utterances within the collected dialogue episodes.
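A toy sketch of how such implicit signals might be folded into a single quality label for a bot turn; the weights, thresholds, and sentiment_score helper are arbitrary illustrations, not the paper's recipe.

```python
# Sketch: derive a scalar quality label for a bot utterance from implicit
# signals in the next human turn. Weights and thresholds are arbitrary.

def sentiment_score(text: str) -> float:
    """Placeholder for any sentiment classifier returning a value in [-1, 1]."""
    raise NotImplementedError

def implicit_feedback_label(next_human_turn: str,
                            length_weight: float = 0.5,
                            sentiment_weight: float = 0.5) -> float:
    length_signal = min(len(next_human_turn.split()) / 20.0, 1.0)  # longer replies ~ engagement
    sent_signal = (sentiment_score(next_human_turn) + 1.0) / 2.0   # map [-1, 1] -> [0, 1]
    return length_weight * length_signal + sentiment_weight * sent_signal
```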
arXiv Detail & Related papers (2023-07-26T11:34:53Z) - FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
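A minimal sketch of the general multi-channel idea: encode each context channel (e.g., conversation history, candidate provenance) separately, fuse the representations, and score the candidate response. The stand-in layers below are assumptions, not the FCC architecture.

```python
# Sketch: fuse multiple context channels to score a candidate response.
# Layers and dimensions are illustrative stand-ins.
import torch
import torch.nn as nn

class MultiChannelRanker(nn.Module):
    def __init__(self, dim: int = 256, n_channels: int = 2):
        super().__init__()
        self.fusion = nn.Linear(dim * n_channels + dim, dim)
        self.scorer = nn.Linear(dim, 1)

    def forward(self, channel_embs: list[torch.Tensor], candidate_emb: torch.Tensor):
        # channel_embs: list of (batch, dim) tensors, one per context channel
        fused = torch.relu(self.fusion(torch.cat(channel_embs + [candidate_emb], dim=-1)))
        return self.scorer(fused).squeeze(-1)  # higher score = better candidate
```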
arXiv Detail & Related papers (2023-03-31T23:58:28Z) - Reflexion: Language Agents with Verbal Reinforcement Learning [44.85337947858337]
Reflexion is a novel framework to reinforce language agents not by updating weights, but through linguistic feedback.
It is flexible enough to incorporate various types (scalar values or free-form language) and sources (external or internally simulated) of feedback signals.
For example, Reflexion achieves a 91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous state-of-the-art GPT-4, which achieves 80%.
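A compact sketch of the Reflexion-style loop: attempt the task, check it with an external evaluator, and on failure ask the model for a verbal reflection that is stored and prepended to the next attempt. The call_llm and run_tests helpers are placeholders for a chat model and, e.g., unit tests.

```python
# Sketch of verbal-reinforcement-style retries: reflect on failures in natural
# language and reuse the reflections as memory. Helpers are placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API."""
    raise NotImplementedError

def run_tests(solution: str) -> bool:
    """Placeholder evaluator, e.g., unit tests for a coding task."""
    raise NotImplementedError

def reflexion_loop(task: str, max_trials: int = 3) -> str:
    reflections: list[str] = []
    solution = ""
    for _ in range(max_trials):
        memory = "\n".join(reflections)
        solution = call_llm(f"{task}\n\nPrevious reflections:\n{memory}\n\nSolution:")
        if run_tests(solution):
            break
        reflections.append(call_llm(
            f"Task: {task}\nFailed attempt:\n{solution}\n"
            "Reflect in a few sentences on what went wrong and how to improve the next attempt:"
        ))
    return solution
```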
arXiv Detail & Related papers (2023-03-20T18:08:50Z) - Learning to Express in Knowledge-Grounded Conversation [62.338124154016825]
We consider two aspects of knowledge expression, namely the structure of the response and style of the content in each part.
We propose a segmentation-based generation model and optimize the model by a variational approach to discover the underlying pattern of knowledge expression in a response.
arXiv Detail & Related papers (2022-04-12T13:43:47Z) - RESPER: Computationally Modelling Resisting Strategies in Persuasive Conversations [0.7505101297221454]
We propose a generalised framework for identifying resisting strategies in persuasive conversations.
Our experiments reveal the asymmetry of power roles in non-collaborative goal-directed conversations.
We also investigate the role of different resisting strategies on the conversation outcome.
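A small sketch of how utterance-level resisting-strategy identification could be set up: label each turn with a strategy given the preceding turns. The label set and classify helper are illustrative placeholders, not RESPER's inventory or model.

```python
# Sketch: annotate each conversation turn with a resisting strategy.
# Labels and classify() are illustrative placeholders.

RESISTING_STRATEGY_LABELS = [
    "source derogation", "counter-argument", "information inquiry",
    "self-assertion", "not resisting",
]

def classify(utterance: str, context: list[str]) -> str:
    """Placeholder for any utterance-level classifier (fine-tuned encoder, prompted LLM, ...)."""
    raise NotImplementedError

def annotate_conversation(turns: list[str]) -> list[tuple[str, str]]:
    # Label each turn conditioned on the preceding turns.
    return [(turn, classify(turn, context=turns[:i])) for i, turn in enumerate(turns)]
```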
arXiv Detail & Related papers (2021-01-26T03:44:17Z) - EnsembleGAN: Adversarial Learning for Retrieval-Generation Ensemble Model on Short-Text Conversation [37.80290058812499]
EnsembleGAN is an adversarial learning framework for enhancing a retrieval-generation ensemble model in the open-domain conversation scenario.
It consists of a language-model-like generator, a ranker generator, and a ranker discriminator.
arXiv Detail & Related papers (2020-04-30T05:59:12Z) - Counterfactual Off-Policy Training for Neural Response Generation [94.76649147381232]
We propose to explore potential responses by counterfactual reasoning.
Training on the counterfactual responses under the adversarial learning framework helps to explore the high-reward area of the potential response space.
An empirical study on the DailyDialog dataset shows that our approach significantly outperforms the HRED model.
arXiv Detail & Related papers (2020-04-29T22:46:28Z)
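A very high-level sketch of the counterfactual off-policy training idea in the last entry above: regenerate a response for an observed context with the current policy, score it with an adversarially trained reward, and update the policy toward high-reward responses. Every component below is a placeholder, not the paper's model.

```python
# High-level sketch of counterfactual, adversarially rewarded policy training.
# All helpers are placeholders.

def generate_counterfactual(policy, context: str, observed_response: str) -> str:
    """Placeholder: regenerate a response for the observed context with the current
    policy (the paper derives this via counterfactual reasoning over the observed scenario)."""
    raise NotImplementedError

def adversarial_reward(context: str, response: str) -> float:
    """Placeholder discriminator-style reward: how plausible the response looks."""
    raise NotImplementedError

def update_policy(policy, context: str, response: str, reward: float) -> None:
    """Placeholder policy update, e.g., a policy-gradient step weighted by the reward."""
    raise NotImplementedError

def training_step(policy, batch):
    for context, observed_response in batch:
        counterfactual = generate_counterfactual(policy, context, observed_response)
        reward = adversarial_reward(context, counterfactual)
        update_policy(policy, context, counterfactual, reward)
```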
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.