Opponent Modeling in Negotiation Dialogues by Related Data Adaptation
- URL: http://arxiv.org/abs/2205.00344v2
- Date: Tue, 3 May 2022 15:39:30 GMT
- Title: Opponent Modeling in Negotiation Dialogues by Related Data Adaptation
- Authors: Kushal Chawla, Gale M. Lucas, Jonathan May, Jonathan Gratch
- Abstract summary: We propose a ranker for identifying priorities from negotiation dialogues.
The model takes in a partial dialogue as input and predicts the priority order of the opponent.
We show the utility of our proposed approach through extensive experiments based on two dialogue datasets.
- Score: 20.505272677769355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Opponent modeling is the task of inferring another party's mental state
within the context of social interactions. In a multi-issue negotiation, it
involves inferring the relative importance that the opponent assigns to each
issue under discussion, which is crucial for finding high-value deals. A
practical model for this task needs to infer these priorities of the opponent
on the fly based on partial dialogues as input, without needing additional
annotations for training. In this work, we propose a ranker for identifying
these priorities from negotiation dialogues. The model takes in a partial
dialogue as input and predicts the priority order of the opponent. We further
devise ways to adapt related data sources for this task to provide more
explicit supervision for incorporating the opponent's preferences and offers,
as a proxy for relying on granular utterance-level annotations. We show the
utility of our proposed approach through extensive experiments based on two
dialogue datasets. We find that the proposed data adaptations lead to strong
performance in zero-shot and few-shot scenarios. Moreover, they allow the model
to perform better than baselines while accessing fewer utterances from the
opponent. We release our code to support future work in this direction.
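As a concrete illustration of the task setup (not the paper's actual ranker, which is a trained neural model), here is a minimal frequency-based baseline: given a partial dialogue, it ranks negotiation issues by how often the opponent mentions each one, as a crude proxy for inferred priority. The issue names and utterances below are hypothetical examples in the style of a multi-issue negotiation.

```python
from collections import Counter

def rank_opponent_priorities(partial_dialogue, issues):
    """Toy baseline: rank negotiation issues by how often the opponent
    mentions each one in the partial dialogue seen so far. This is only
    a mention-count heuristic standing in for a learned ranker."""
    counts = Counter({issue: 0 for issue in issues})
    for utterance in partial_dialogue:
        text = utterance.lower()
        for issue in issues:
            counts[issue] += text.count(issue)
    # More mentions -> assumed higher priority; ties broken alphabetically.
    return sorted(issues, key=lambda issue: (-counts[issue], issue))

# Hypothetical partial dialogue: only the opponent's utterances so far.
opponent_utterances = [
    "I really need the firewood, we get cold at night.",
    "Firewood is a must for me, but you can take most of the water.",
    "Maybe one package of food for me?",
]
print(rank_opponent_priorities(opponent_utterances, ["food", "water", "firewood"]))
# -> ['firewood', 'food', 'water']
```

A trained model would replace the mention counts with scores from an encoder over the partial dialogue, but the input/output contract (partial dialogue in, priority order out) is the same.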
Related papers
- Pre-training Multi-party Dialogue Models with Latent Discourse Inference [85.9683181507206]
We pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying.
To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model.
arXiv Detail & Related papers (2023-05-24T14:06:27Z)
- Learning to Memorize Entailment and Discourse Relations for Persona-Consistent Dialogues [8.652711997920463]
Existing works have improved the performance of dialogue systems by intentionally learning interlocutor personas with sophisticated network structures.
This study proposes a method of learning to memorize entailment and discourse relations for persona-consistent dialogue tasks.
arXiv Detail & Related papers (2023-01-12T08:37:00Z)
- He Said, She Said: Style Transfer for Shifting the Perspective of Dialogues [75.58367095888914]
We define a new style transfer task, perspective shift, which rewrites a dialogue from informal first-person utterances into a formal third-person account of the conversation.
As a sample application, we demonstrate that applying perspective shifting to a dialogue summarization dataset (SAMSum) substantially improves the zero-shot performance of extractive news summarization models.
arXiv Detail & Related papers (2022-10-27T14:16:07Z)
- Conversation Disentanglement with Bi-Level Contrastive Learning [26.707584899718288]
Existing methods overemphasize pairwise utterance relations while paying inadequate attention to modeling the utterance-to-context relation.
We propose a general disentanglement model based on bi-level contrastive learning: it pulls utterances in the same session closer together while encouraging each utterance to stay near its clustered session prototypes in the representation space.
arXiv Detail & Related papers (2022-10-27T08:41:46Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- Improving Unsupervised Dialogue Topic Segmentation with Utterance-Pair Coherence Scoring [8.31009800792799]
We present a strategy to generate a training corpus for utterance-pair coherence scoring.
Then, we train a BERT-based neural utterance-pair coherence model with the obtained training corpus.
Finally, this model is used to measure the topical relevance between adjacent utterances, serving as the basis for segmentation inference.
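The segmentation step described above can be sketched with a simple boundary rule: place a topic boundary wherever the coherence between adjacent utterances drops below a threshold. The paper trains a BERT-based coherence model; here a word-overlap (Jaccard) score stands in for it, and the dialogue and threshold are illustrative assumptions.

```python
def jaccard_coherence(u1, u2):
    """Stand-in coherence score: word-overlap (Jaccard similarity)
    between two utterances. The paper instead uses a trained
    BERT-based utterance-pair coherence model."""
    a, b = set(u1.lower().split()), set(u2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def segment_dialogue(utterances, coherence, threshold=0.15):
    """Unsupervised topic segmentation: insert a boundary before
    utterance i+1 whenever coherence(u_i, u_{i+1}) < threshold."""
    return [i + 1 for i in range(len(utterances) - 1)
            if coherence(utterances[i], utterances[i + 1]) < threshold]

# Hypothetical dialogue with a topic shift at the third utterance.
dialogue = [
    "did you watch the game last night",
    "yes the game went to overtime",
    "anyway about the project deadline",
    "the deadline moved to friday",
]
print(segment_dialogue(dialogue, jaccard_coherence))
# -> [2]  (boundary before "anyway about the project deadline")
```

Swapping `jaccard_coherence` for a learned scorer leaves the inference logic unchanged, which is why the coherence model can be trained independently on an automatically generated corpus.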
arXiv Detail & Related papers (2021-06-12T08:49:20Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Is this Dialogue Coherent? Learning from Dialogue Acts and Entities [82.44143808977209]
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of distribution of entities.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
arXiv Detail & Related papers (2020-06-17T21:02:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.