Modeling Non-Cooperative Dialogue: Theoretical and Empirical Insights
- URL: http://arxiv.org/abs/2207.07255v1
- Date: Fri, 15 Jul 2022 02:08:41 GMT
- Title: Modeling Non-Cooperative Dialogue: Theoretical and Empirical Insights
- Authors: Anthony Sicilia, Tristan Maidment, Pat Healy, and Malihe Alikhani
- Abstract summary: We investigate the ability of agents to identify non-cooperative interlocutors while completing a concurrent visual-dialogue task.
We use the tools of learning theory to develop a theoretical model for identifying non-cooperative interlocutors and apply this theory to analyze different communication strategies.
- Score: 11.462075538526703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Investigating cooperativity of interlocutors is central in studying
pragmatics of dialogue. Models of conversation that only assume cooperative
agents fail to explain the dynamics of strategic conversations. Thus, we
investigate the ability of agents to identify non-cooperative interlocutors
while completing a concurrent visual-dialogue task. Within this novel setting,
we study the optimality of communication strategies for achieving this
multi-task objective. We use the tools of learning theory to develop a
theoretical model for identifying non-cooperative interlocutors and apply this
theory to analyze different communication strategies. We also introduce a
corpus of non-cooperative conversations about images in the GuessWhat?! dataset
proposed by De Vries et al. (2017). We use reinforcement learning to implement
multiple communication strategies in this context and find that the empirical
results validate our theory.
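In learning-theoretic terms, the detection problem the abstract describes can be framed as binary classification over features of the unfolding dialogue. The sketch below is a hypothetical illustration, not the paper's model: the features (a contradiction rate and an answer-inconsistency score) and the data are fabricated; only the framing — learn a hypothesis that separates cooperative from non-cooperative interlocutors — comes from the abstract.

```python
import math
import random

def train_logistic(samples, labels, lr=0.5, epochs=200):
    """Train a logistic-regression detector by plain gradient descent.

    samples: feature vectors, e.g. [contradiction_rate, inconsistency_score]
    labels:  1 = non-cooperative interlocutor, 0 = cooperative
    """
    dim = len(samples[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(non-cooperative)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy data: non-cooperative partners contradict themselves more often.
random.seed(0)
cooperative = [[random.uniform(0.0, 0.3), random.uniform(0.0, 0.4)] for _ in range(50)]
adversarial = [[random.uniform(0.6, 1.0), random.uniform(0.5, 1.0)] for _ in range(50)]
X, y = cooperative + adversarial, [0] * 50 + [1] * 50

w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

In the paper's setting the agent must do this while also completing the visual-dialogue task, so the real feature space and hypothesis class are far richer; the point of the toy is only the classification framing.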
Related papers
- Rapport-Driven Virtual Agent: Rapport Building Dialogue Strategy for Improving User Experience at First Meeting [3.059886686838972]
This study aims to establish human-agent rapport through small talk by using a rapport-building strategy.
We implemented this strategy for virtual agents by prompting a large language model (LLM) with the dialogue strategies.
arXiv Detail & Related papers (2024-06-14T08:47:15Z)
- Investigating Reinforcement Learning for Communication Strategies in a Task-Initiative Setting [8.680676599607123]
We analyze the trade-offs between initial presentation and subsequent followup as a function of user clarification strategy.
We find surprising advantages to coherence-based representations of dialogue strategy, which bring minimal data requirements, explainable choices, and strong audit capabilities.
arXiv Detail & Related papers (2023-08-03T00:10:23Z)
- Dialogue Agents 101: A Beginner's Guide to Critical Ingredients for Designing Effective Conversational Systems [29.394466123216258]
This study provides a comprehensive overview of the primary characteristics of a dialogue agent, their corresponding open-domain datasets, and the methods used to benchmark these datasets.
We propose UNIT, a UNified dIalogue dataseT constructed from conversations in existing datasets for different dialogue tasks, capturing the nuances of each.
arXiv Detail & Related papers (2023-07-14T10:05:47Z)
- MindDial: Belief Dynamics Tracking with Theory-of-Mind Modeling for Situated Neural Dialogue Generation [62.44907105496227]
MindDial is a novel conversational framework that can generate situated free-form responses with theory-of-mind modeling.
We introduce an explicit mind module that can track the speaker's belief and the speaker's prediction of the listener's belief.
Our framework is applied to both prompting and fine-tuning-based models, and is evaluated across scenarios involving both common ground alignment and negotiation.
arXiv Detail & Related papers (2023-06-27T07:24:32Z)
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z)
- LEATHER: A Framework for Learning to Generate Human-like Text in Dialogue [15.102346715690755]
We propose a new theoretical framework for learning to generate text in dialogue.
Compared to existing theories of learning, our framework allows for analysis of the multi-faceted goals inherent to text-generation.
arXiv Detail & Related papers (2022-10-14T13:05:11Z)
- Probing Task-Oriented Dialogue Representation from Language Models [106.02947285212132]
This paper investigates pre-trained language models to find out which model intrinsically carries the most informative representation for task-oriented dialogue tasks.
We fine-tune a feed-forward layer as the classifier probe on top of a fixed pre-trained language model with annotated labels in a supervised way.
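The probing setup described above — a trainable feed-forward layer on top of a fixed pre-trained model — can be sketched as follows. This is a hypothetical illustration: the "embeddings" here are synthetic stand-ins for a real language model's frozen representations, and the probe is a single linear layer trained with a perceptron-style update. Only the probe's parameters ever move.

```python
import random

# Stand-in for frozen pre-trained representations: in the real setup these
# come from a fixed language model; here we fabricate 2-D "embeddings" whose
# geometry encodes a dialogue label, purely for illustration.
random.seed(1)

def fake_embedding(label):
    center = (1.0, 1.0) if label == 1 else (-1.0, -1.0)
    return [c + random.gauss(0, 0.3) for c in center]

labels = [random.randint(0, 1) for _ in range(200)]
embeddings = [fake_embedding(y) for y in labels]   # frozen: never updated

# The probe: a single linear layer trained on top of the fixed features.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(100):
    for x, y in zip(embeddings, labels):
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        if pred != y:                       # perceptron-style correction
            sign = 1 if y == 1 else -1
            w = [w[0] + lr * sign * x[0], w[1] + lr * sign * x[1]]
            b += lr * sign

correct = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
              for x, y in zip(embeddings, labels))
probe_accuracy = correct / len(labels)
print(f"probe accuracy: {probe_accuracy:.2f}")
```

The diagnostic logic is that a high probe accuracy indicates the label is linearly recoverable from the frozen representation, i.e. the pre-trained model already "carries" that information.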
arXiv Detail & Related papers (2020-10-26T21:34:39Z)
- Is this Dialogue Coherent? Learning from Dialogue Acts and Entities [82.44143808977209]
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of distribution of entities.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
arXiv Detail & Related papers (2020-06-17T21:02:40Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
- Emergence of Pragmatics from Referential Game between Theory of Mind Agents [64.25696237463397]
We propose an algorithm with which agents can spontaneously learn the ability to "read between the lines" without any explicit hand-designed rules.
We integrate the theory of mind (ToM) in a cooperative multi-agent pedagogical situation and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.
arXiv Detail & Related papers (2020-01-21T19:37:33Z)
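A referential game of the kind described in the last entry can be sketched with tabular policies and a plain REINFORCE update. Everything below is a toy assumption — two referents, a two-message vocabulary, reward 1 on a correct guess — and deliberately omits the theory-of-mind component the paper adds on top.

```python
import math
import random

random.seed(2)
K, M = 2, 2        # number of referents / size of message vocabulary

# Tabular policy logits: speaker maps target -> message, listener message -> guess.
speaker = [[0.0] * M for _ in range(K)]
listener = [[0.0] * K for _ in range(M)]

def sample(logits):
    """Sample an action index from softmax(logits); also return the probs."""
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    probs = [e / sum(exps) for e in exps]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return len(probs) - 1, probs

lr = 0.5
for _ in range(4000):
    target = random.randrange(K)
    msg, sp = sample(speaker[target])
    guess, lp = sample(listener[msg])
    reward = 1.0 if guess == target else 0.0
    # REINFORCE: grad of log pi(a) is one_hot(a) - probs, scaled by reward.
    for i in range(M):
        speaker[target][i] += lr * reward * ((1.0 if i == msg else 0.0) - sp[i])
    for i in range(K):
        listener[msg][i] += lr * reward * ((1.0 if i == guess else 0.0) - lp[i])

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

# A protocol has emerged for referent t if greedy decoding round-trips.
success = sum(argmax(listener[argmax(speaker[t])]) == t for t in range(K))
print(f"greedy protocol success: {success}/{K}")
```

With enough episodes the two agents typically converge on a consistent naming protocol, though REINFORCE without a baseline can be slow to break symmetry; the adaptive RL algorithm in the paper is designed to improve on exactly this kind of plain setup.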
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.