Exploring Early Prediction of Buyer-Seller Negotiation Outcomes
- URL: http://arxiv.org/abs/2004.02363v2
- Date: Fri, 26 Feb 2021 03:17:36 GMT
- Title: Exploring Early Prediction of Buyer-Seller Negotiation Outcomes
- Authors: Kushal Chawla, Gale Lucas, Jonathan May, Jonathan Gratch
- Abstract summary: We explore a novel task of early prediction of buyer-seller negotiation outcomes, by varying the fraction of utterances that the model can access.
We explore the feasibility of early prediction by using traditional feature-based methods, as well as by incorporating the non-linguistic task context into a pretrained language model.
- Score: 19.35826558501076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Agents that negotiate with humans find broad applications in pedagogy and
conversational AI. Most efforts in human-agent negotiations rely on restrictive
menu-driven interfaces for communication. To advance the research in
language-based negotiation systems, we explore a novel task of early prediction
of buyer-seller negotiation outcomes, by varying the fraction of utterances
that the model can access. We explore the feasibility of early prediction by
using traditional feature-based methods, as well as by incorporating the
non-linguistic task context into a pretrained language model using sentence
templates. We further quantify the extent to which linguistic features help in
making better predictions apart from the task-specific price information.
Finally, probing the pretrained model helps us to identify specific features,
such as trust and agreement, that contribute to the prediction performance.
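The abstract sketches a concrete recipe: keep only the first fraction of the dialogue's utterances, verbalize the non-linguistic task context (prices) as sentence templates, and feed both to a pretrained language model. Below is a minimal, illustrative sketch of that input construction, assuming a BERT-style encoder from Hugging Face transformers; the field names (listing_price, buyer_target), the templates, and the scalar regression head are assumptions for illustration, not the authors' released code.

```python
# Illustrative sketch only: early outcome prediction from a partial dialogue,
# with non-linguistic task context verbalized via sentence templates.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # one scalar, e.g. a price-based outcome
)

def build_input(utterances, listing_price, buyer_target, fraction):
    """Keep the first `fraction` of utterances; prepend templated context."""
    k = max(1, int(len(utterances) * fraction))
    partial_dialogue = " ".join(utterances[:k])
    # Sentence templates carrying the non-linguistic task context.
    context = (f"The item is listed at {listing_price} dollars. "
               f"The buyer wants to pay {buyer_target} dollars.")
    return tokenizer(context, partial_dialogue, truncation=True,
                     max_length=512, return_tensors="pt")

inputs = build_input(
    ["Hi, is the bike still available?", "Yes, and it is in great condition."],
    listing_price=150, buyer_target=100, fraction=0.5)
with torch.no_grad():
    outcome = model(**inputs).logits  # untrained head; shown for shape only
```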
Related papers
- Beyond Text: Leveraging Multi-Task Learning and Cognitive Appraisal Theory for Post-Purchase Intention Analysis [10.014248704653]
This study evaluates multi-task learning frameworks grounded in Cognitive Appraisal Theory to predict user behavior.
Our experiments show that users' language and traits improve predictions above and beyond models predicting only from text.
arXiv Detail & Related papers (2024-07-11T04:57:52Z)
- Language of Bargaining [60.218128617765046]
We build a novel dataset for studying how the use of language shapes bilateral bargaining.
Our work also reveals linguistic signals that are predictive of negotiation outcomes.
arXiv Detail & Related papers (2023-06-12T13:52:01Z)
- Commonsense Knowledge Transfer for Pre-trained Language Models [83.01121484432801]
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
arXiv Detail & Related papers (2023-06-04T15:44:51Z)
- Probing via Prompting [71.7904179689271]
This paper introduces a novel model-free approach to probing, by formulating probing as a prompting task.
We conduct experiments on five probing tasks and show that our approach is comparable to or better than diagnostic probes at extracting information.
We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
arXiv Detail & Related papers (2022-07-04T22:14:40Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- Few-shot Subgoal Planning with Language Models [58.11102061150875]
We show that language priors encoded in pre-trained language models allow us to infer fine-grained subgoal sequences.
In contrast to recent methods which make strong assumptions about subgoal supervision, our experiments show that language models can infer detailed subgoal sequences without any fine-tuning.
arXiv Detail & Related papers (2022-05-28T01:03:30Z)
- Opponent Modeling in Negotiation Dialogues by Related Data Adaptation [20.505272677769355]
We propose a ranker for identifying priorities from negotiation dialogues.
The model takes a partial dialogue as input and predicts the opponent's priority order (see the ranker sketch after this list).
We show the utility of our proposed approach through extensive experiments based on two dialogue datasets.
arXiv Detail & Related papers (2022-04-30T21:11:41Z)
- Towards Building Economic Models of Conversational Search [17.732575878508566]
We develop two economic models of conversational search based on patterns previously observed during search sessions.
Our models show that the amount of feedback given/requested depends on its efficiency at improving the initial or subsequent query.
arXiv Detail & Related papers (2022-01-21T15:20:51Z)
- Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations [20.855686009404703]
We propose to use ideas from predictive coding theory to augment BERT-style language models with a mechanism that allows them to learn discourse-level representations.
Our proposed approach is able to predict future sentences using explicit top-down connections that operate at the intermediate layers of the network.
arXiv Detail & Related papers (2021-09-10T00:45:28Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as a classifier probe on top of a fixed pre-trained language model, using annotated labels (a minimal sketch of this recipe appears after this list).
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
- An Empirical Investigation of Pre-Trained Transformer Language Models for Open-Domain Dialogue Generation [23.343006562849126]
We present an empirical investigation of pre-trained Transformer-based auto-regressive language models for the task of open-domain dialogue generation.
The standard pre-training and fine-tuning paradigm is employed for learning.
Experiments are conducted on the typical single-turn and multi-turn dialogue corpora such as Weibo, Douban, Reddit, DailyDialog, and Persona-Chat.
arXiv Detail & Related papers (2020-03-09T15:20:21Z)
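The "Opponent Modeling in Negotiation Dialogues" entry above describes a ranker that maps a partial dialogue to the opponent's priority order. Here is one plausible sketch that scores each candidate issue jointly with the dialogue using a cross-encoder; the issue names, the hypothesis template, and the untrained scoring head are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: rank negotiation issues by predicted opponent
# priority, scoring each issue jointly with the partial dialogue.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
scorer = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)  # one priority score per issue

def rank_priorities(partial_dialogue, issues):
    """Return issues sorted from highest to lowest predicted priority."""
    scores = {}
    for issue in issues:
        # Hypothetical template pairing the dialogue with a candidate issue.
        batch = tokenizer(partial_dialogue,
                          f"The opponent cares most about {issue}.",
                          truncation=True, return_tensors="pt")
        with torch.no_grad():
            scores[issue] = scorer(**batch).logits.item()
    return sorted(issues, key=scores.get, reverse=True)

order = rank_priorities("I really need extra firewood for the cold nights.",
                        ["firewood", "water", "food"])  # illustrative issues
```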
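Similarly, the "Probing Task-Oriented Dialogue Representation from Language Models" entry states its recipe directly: fine-tune only a feed-forward classifier probe on top of a frozen pretrained encoder. A minimal sketch under that reading; the encoder choice, label count, and training details are assumptions.

```python
# Illustrative sketch only: a supervised feed-forward probe on top of a
# frozen pretrained language model.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
for p in encoder.parameters():
    p.requires_grad = False  # the language model stays fixed

probe = nn.Linear(encoder.config.hidden_size, 4)  # e.g. 4 dialogue labels
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(texts, labels):
    """One supervised update of the probe; gradients never reach the encoder."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state[:, 0]  # [CLS] vectors
    loss = loss_fn(probe(hidden), torch.tensor(labels))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

probe_step(["i want to book a cheap hotel", "thanks, goodbye"], [0, 3])
```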
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this content and is not responsible for any consequences of its use.