Rehearsal: Simulating Conflict to Teach Conflict Resolution
- URL: http://arxiv.org/abs/2309.12309v2
- Date: Thu, 29 Feb 2024 06:38:27 GMT
- Title: Rehearsal: Simulating Conflict to Teach Conflict Resolution
- Authors: Omar Shaikh, Valentino Chai, Michele J. Gelfand, Diyi Yang, Michael S. Bernstein
- Abstract summary: Rehearsal is a system that allows users to rehearse conflicts with a believable simulated interlocutor.
Users can utilize Rehearsal to practice handling a variety of predefined conflict scenarios.
Rehearsal uses IRP to generate utterances grounded in conflict resolution theory.
- Score: 54.32934135393982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpersonal conflict is an uncomfortable but unavoidable fact of life.
Navigating conflict successfully is a skill -- one that can be learned through
deliberate practice -- but few have access to effective training or feedback.
To expand this access, we introduce Rehearsal, a system that allows users to
rehearse conflicts with a believable simulated interlocutor, explore
counterfactual "what if?" scenarios to identify alternative conversational
paths, and learn through feedback on how and when to apply specific conflict
strategies. Users can utilize Rehearsal to practice handling a variety of
predefined conflict scenarios, from office disputes to relationship issues, or
they can choose to create their own setting. To enable Rehearsal, we develop
IRP prompting, a method of conditioning output of a large language model on the
influential Interest-Rights-Power (IRP) theory from conflict resolution.
Rehearsal uses IRP to generate utterances grounded in conflict resolution
theory, guiding users towards counterfactual conflict resolution strategies
that help de-escalate difficult conversations. In a between-subjects
evaluation, 40 participants engaged in an actual conflict with a confederate
after training. Compared to a control group with lecture material covering the
same IRP theory, participants with simulated training from Rehearsal
significantly improved their performance in the unaided conflict: they reduced
their use of escalating competitive strategies by an average of 67%, while
doubling their use of cooperative strategies. Overall, Rehearsal highlights the
potential effectiveness of language models as tools for learning and practicing
interpersonal skills.
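The paper's exact prompts are not reproduced here, but IRP prompting can be sketched as conditioning a chat model on the Interest-Rights-Power taxonomy and asking it to stay in character while tagging its strategy. A minimal sketch in Python, assuming an OpenAI-style chat API; the prompt wording, model name, and `simulate_turn` helper are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of IRP-conditioned prompting (not the paper's actual prompts).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

IRP_SYSTEM_PROMPT = """\
You are a simulated interlocutor in an interpersonal conflict.
Ground every utterance in Interest-Rights-Power (IRP) theory:
- Interests: underlying needs and concerns (cooperative, de-escalating)
- Rights: appeals to rules, contracts, or fairness standards
- Power: threats, status, or coercion (competitive, escalating)
Stay in character, and label each utterance with the IRP strategy it uses."""

def simulate_turn(scenario: str, history: list[dict], strategy: str = "interests") -> str:
    """Generate the interlocutor's next utterance, steered toward one IRP strategy."""
    messages = [
        {"role": "system", "content": IRP_SYSTEM_PROMPT},
        {"role": "user", "content": f"Scenario: {scenario}\nRespond using the '{strategy}' strategy."},
        *history,  # prior conversation turns as {"role": ..., "content": ...} dicts
    ]
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content
```

Under this reading, the counterfactual "what if?" exploration amounts to replaying the same `history` with a different `strategy` argument and comparing the resulting conversational paths.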
Related papers
- Conflict-Aware Adversarial Training [29.804312958830636]
We argue that the weighted-average method does not provide the best trade-off between standard performance and adversarial robustness.
We propose a new trade-off paradigm for adversarial training with a conflict-aware factor for the convex combination of standard and adversarial loss, named Conflict-Aware Adversarial Training (CA-AT).
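The summary leaves the conflict-aware factor unspecified, so the sketch below is one plausible reading rather than CA-AT itself: measure how strongly the gradients of the standard and adversarial losses disagree, and let that disagreement set the convex mixing weight. A PyTorch sketch; the cosine-similarity-based `alpha` is an assumption:

```python
# Illustrative conflict-aware convex combination of standard and adversarial loss.
# The cosine-based mixing factor is an assumption, not the paper's exact formula.
import torch
import torch.nn.functional as F

def conflict_aware_loss(model, x_clean, x_adv, y):
    loss_std = F.cross_entropy(model(x_clean), y)
    loss_adv = F.cross_entropy(model(x_adv), y)

    params = [p for p in model.parameters() if p.requires_grad]
    g_std = torch.autograd.grad(loss_std, params, retain_graph=True)
    g_adv = torch.autograd.grad(loss_adv, params, retain_graph=True)
    flatten = lambda grads: torch.cat([g.reshape(-1) for g in grads])
    cos = F.cosine_similarity(flatten(g_std), flatten(g_adv), dim=0)

    # Low cosine similarity = conflicting objectives -> weight the adversarial term more.
    alpha = 0.5 * (1.0 - cos.clamp(-1.0, 1.0))
    return (1.0 - alpha) * loss_std + alpha * loss_adv
```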
arXiv Detail & Related papers (2024-10-21T23:44:03Z)
- Tell Me What You Don't Know: Enhancing Refusal Capabilities of Role-Playing Agents via Representation Space Analysis and Editing [54.098203568194606]
We develop an evaluation benchmark that includes requests conflicting with contextual knowledge, requests conflicting with parametric knowledge, and non-conflicting requests.
We find that most RPAs show significant performance gaps across these different types of conflicting requests.
We introduce a lightweight representation editing approach that conveniently shifts conflicting requests to the rejection region.
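Only the high-level idea is given here; a generic form of representation editing is to add a "refusal direction" to a layer's hidden states at inference time so that conflicting requests land in the rejection region. A PyTorch sketch under that assumption (how the paper derives the direction and picks layers is not specified in the summary):

```python
# Illustrative steering-vector edit; not the paper's exact procedure.
import torch

def add_refusal_steering(layer: torch.nn.Module, refusal_dir: torch.Tensor, scale: float = 4.0):
    """Shift a layer's hidden states toward the rejection region along refusal_dir.

    refusal_dir is typically estimated as the difference between mean activations
    on refused vs. complied requests (an assumption here)."""
    direction = refusal_dir / refusal_dir.norm()

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * direction.to(hidden.dtype).to(hidden.device)
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    return layer.register_forward_hook(hook)  # keep the handle to remove() later
```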
arXiv Detail & Related papers (2024-09-25T13:18:12Z)
- Discerning and Resolving Knowledge Conflicts through Adaptive Decoding with Contextual Information-Entropy Constraint [20.543282448771336]
We propose an adaptive decoding method to discern whether knowledge conflicts occur and to resolve them.
Experiments show that COIECD exhibits strong performance and robustness over knowledge conflicts in realistic datasets.
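As a rough illustration of an information-entropy constraint: compare the model's next-token entropy with and without the retrieved context, and treat a large shift as evidence of a knowledge conflict. A simplified sketch assuming a Hugging Face-style causal LM; COIECD's actual constraint and decoding adjustment are more elaborate:

```python
# Simplified entropy-shift conflict detector; not COIECD's exact decision rule.
import torch
import torch.nn.functional as F

@torch.no_grad()
def context_conflicts(model, ids_with_ctx, ids_without_ctx, tau: float = 1.5):
    """Flag a conflict when adding context shifts next-token entropy beyond tau."""
    def next_token_entropy(input_ids):
        logits = model(input_ids).logits[:, -1, :]  # last-position logits
        probs = F.softmax(logits, dim=-1)
        return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    shift = (next_token_entropy(ids_with_ctx) - next_token_entropy(ids_without_ctx)).abs()
    return shift > tau  # True -> adapt decoding to honor the context
```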
arXiv Detail & Related papers (2024-02-19T07:10:30Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
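Concretely, the reward function never has to be specified: each human intervention is logged as a negative reward, and an off-policy RL algorithm learns to avoid the states that trigger interventions. A schematic sketch in which `env`, `policy`, and `human` are assumed interfaces:

```python
# Schematic RLIF-style data collection; interfaces are assumptions.
def rlif_step(env, policy, human):
    obs = env.observe()
    action = policy.act(obs)
    if human.wants_to_intervene(obs, action):  # expert takes over the controls
        action = human.act(obs)
        reward = -1.0                          # the intervention itself is the signal
    else:
        reward = 0.0
    next_obs = env.step(action)
    return (obs, action, reward, next_obs)     # stored in an off-policy replay buffer
```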
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human make the final decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z)
- Understanding Interpersonal Conflict Types and their Impact on Perception Classification [7.907976678407914]
We use a novel annotation scheme and release a new dataset of situations and conflict aspect annotations.
We then build a classifier to predict whether someone will perceive the actions of one individual as right or wrong in a given situation.
Our findings have important implications for understanding conflict and social norms.
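As a toy illustration of the prediction task (the paper's features, model, and dataset fields are not given in this summary, so everything below is an assumption):

```python
# Hypothetical baseline for right/wrong perception classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

situations = [
    "I told my roommate to stop borrowing my laptop without asking.",
    "I read my partner's private messages while they were asleep.",
]
labels = ["right", "wrong"]  # how readers perceive the author's action

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(situations, labels)
print(clf.predict(["I returned a lost wallet after taking the cash."]))
```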
arXiv Detail & Related papers (2022-08-18T10:39:35Z)
- Modeling Non-Cooperative Dialogue: Theoretical and Empirical Insights [11.462075538526703]
We investigate the ability of agents to identify non-cooperative interlocutors while completing a concurrent visual-dialogue task.
We use the tools of learning theory to develop a theoretical model for identifying non-cooperative interlocutors and apply this theory to analyze different communication strategies.
arXiv Detail & Related papers (2022-07-15T02:08:41Z)
- Diluted Near-Optimal Expert Demonstrations for Guiding Dialogue Stochastic Policy Optimisation [0.716879432974126]
A learning dialogue agent can infer its behaviour from human-to-human or human-machine conversations.
One solution to speed up the learning process is to guide the agent's exploration with the help of an expert.
We present several imitation learning strategies for dialogue policy where the guiding expert is a near-optimal handcrafted policy.
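One simple way to dilute near-optimal expert demonstrations into exploration is to let the handcrafted expert pick the action with some probability and let the learner act otherwise. A schematic sketch; the interfaces and the mixing scheme are assumptions, not the paper's exact strategies:

```python
# Schematic expert-guided exploration for a dialogue policy; interfaces assumed.
import random

def guided_action(learner, expert, state, eps: float = 0.3):
    """With probability eps the near-optimal expert acts; otherwise the learner does."""
    if random.random() < eps:
        return expert.act(state), True    # expert-labelled transition
    return learner.act(state), False      # learner's own exploration
```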
arXiv Detail & Related papers (2020-11-25T15:00:36Z)
- Guided Dialog Policy Learning without Adversarial Learning in the Loop [103.20723982440788]
A number of adversarial learning methods have been proposed to learn the reward function together with the dialogue policy.
We propose to decompose the adversarial training into two steps.
First, we train the discriminator with an auxiliary dialogue generator and then incorporate a derived reward model into a common RL method to guide the dialogue policy learning.
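Schematically, the decomposition separates the adversarial part from policy learning entirely: the discriminator is trained once against an auxiliary generator, then frozen and reused as a reward model inside an ordinary RL loop. A sketch with assumed interfaces (`sample`, `rollout`, and the `update` methods are placeholders):

```python
# Two-step decomposition sketch; all interfaces here are assumptions.
def train_reward_model(discriminator, human_dialogs, aux_generator, steps, sample):
    for _ in range(steps):
        real = sample(human_dialogs)
        fake = aux_generator.generate()
        discriminator.update(real, fake)     # GAN-style update, done offline
    discriminator.freeze()                   # no further adversarial training
    return discriminator

def train_policy(policy, env, reward_model, episodes, rollout):
    for _ in range(episodes):
        dialog = rollout(policy, env)
        reward = reward_model.score(dialog)  # derived reward, discriminator frozen
        policy.update(dialog, reward)        # any common RL method (e.g. PPO)
```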
arXiv Detail & Related papers (2020-04-07T11:03:17Z)