Understanding Common Ground Misalignment in Goal-Oriented Dialog: A Case-Study with Ubuntu Chat Logs
- URL: http://arxiv.org/abs/2503.12370v2
- Date: Sat, 26 Jul 2025 07:44:47 GMT
- Title: Understanding Common Ground Misalignment in Goal-Oriented Dialog: A Case-Study with Ubuntu Chat Logs
- Authors: Rupak Sarkar, Neha Srikanth, Taylor Hudson, Rachel Rudinger, Claire Bonial, Philip Resnik
- Abstract summary: We study failures of grounding in the Ubuntu IRC dataset, where participants use text-only communication to resolve technical issues. We find that disruptions in conversational flow often stem from a misalignment in common ground, driven by a divergence in beliefs and assumptions held by participants.
- Score: 21.001649494291712
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: While it is commonly accepted that maintaining common ground plays a role in conversational success, little prior research exists connecting conversational grounding to success in task-oriented conversations. We study failures of grounding in the Ubuntu IRC dataset, where participants use text-only communication to resolve technical issues. We find that disruptions in conversational flow often stem from a misalignment in common ground, driven by a divergence in beliefs and assumptions held by participants. These disruptions, which we call conversational friction, significantly correlate with task success. We find that although LLMs can identify overt cases of conversational friction, they struggle with subtler and more context-dependent instances requiring pragmatic or domain-specific reasoning.
Related papers
- Frame of Reference: Addressing the Challenges of Common Ground Representation in Situational Dialogs [2.730457204085116]
Common ground plays a critical role in situated spoken dialogues, where interlocutors must maintain shared references to entities, events, and relations to sustain coherent interaction. We evaluate a model's ability to establish and exploit common ground through relational references to entities within the shared context in a situational dialogue.
arXiv Detail & Related papers (2026-01-14T10:45:22Z) - Understanding and Predicting Derailment in Toxic Conversations on GitHub [6.343946534579351]
This study aims to understand and predict conversational derailment leading to toxicity on GitHub. Based on this dataset, we identify unique characteristics of toxic conversations and derailment points. We propose a proactive moderation approach to automatically detect and address potentially harmful conversations before escalation.
arXiv Detail & Related papers (2025-03-04T02:01:37Z) - Common Ground Tracking in Multimodal Dialogue [13.763043173931024]
We present a method for automatically identifying the current set of shared beliefs and "questions under discussion" (QUDs) of a group with a shared goal.
We annotate a dataset of multimodal interactions in a shared physical space with speech transcriptions, prosodic features, gestures, actions, and facets of collaboration.
We cascade the outputs into a set of formal closure rules derived from situated evidence, belief axioms, and update operations.
arXiv Detail & Related papers (2024-03-26T00:25:01Z) - Conversational Grounding: Annotation and Analysis of Grounding Acts and Grounding Units [3.805394793605586]
We present the annotation of two dialog corpora employing Grounding Acts, Grounding Units, and a measure of their degree of grounding.
Our work aims to make conversations with machines better understood and more reliable in natural day-to-day collaborative dialogs.
arXiv Detail & Related papers (2024-03-25T10:39:18Z) - Reasoning in Conversation: Solving Subjective Tasks through Dialogue Simulation for Large Language Models [56.93074140619464]
We propose RiC (Reasoning in Conversation), a method that focuses on solving subjective tasks through dialogue simulation.
The motivation of RiC is to mine useful contextual information by simulating dialogues instead of supplying chain-of-thought style rationales.
We evaluate both API-based and open-source LLMs including GPT-4, ChatGPT, and OpenChat across twelve tasks.
arXiv Detail & Related papers (2024-02-27T05:37:10Z) - Grounding Gaps in Language Model Generations [67.79817087930678]
We study whether large language models generate text that reflects human grounding.
We find that -- compared to humans -- LLMs generate language with less conversational grounding.
To understand the roots of the identified grounding gap, we examine the role of instruction tuning and preference optimization.
arXiv Detail & Related papers (2023-11-15T17:40:27Z) - Thread of Thought Unraveling Chaotic Contexts [133.24935874034782]
"Thread of Thought" (ThoT) strategy draws inspiration from human cognitive processes.
In experiments, ThoT significantly improves reasoning performance compared to other prompting techniques.
arXiv Detail & Related papers (2023-11-15T06:54:44Z) - SpokenWOZ: A Large-Scale Speech-Text Benchmark for Spoken Task-Oriented Dialogue Agents [72.42049370297849]
SpokenWOZ is a large-scale speech-text dataset for spoken task-oriented dialogue (TOD).
Cross-turn slot and reasoning slot detection are new challenges for SpokenWOZ.
arXiv Detail & Related papers (2023-05-22T13:47:51Z) - PK-ICR: Persona-Knowledge Interactive Context Retrieval for Grounded Dialogue [21.266410719325208]
Persona and Knowledge Dual Context Identification is a task to identify persona and knowledge jointly for a given dialogue.
We develop a novel grounding retrieval method that utilizes all contexts of dialogue simultaneously.
arXiv Detail & Related papers (2023-02-13T20:27:26Z) - "How Robust r u?": Evaluating Task-Oriented Dialogue Systems on Spoken Conversations [87.95711406978157]
This work presents a new benchmark on spoken task-oriented conversations.
We study multi-domain dialogue state tracking and knowledge-grounded dialogue modeling.
Our data set enables speech-based benchmarking of task-oriented dialogue systems.
arXiv Detail & Related papers (2021-09-28T04:51:04Z) - Disentangling Online Chats with DAG-Structured LSTMs [55.33014148383343]
DAG-LSTMs are a generalization of Tree-LSTMs that can handle directed acyclic dependencies.
We show that our proposed model achieves state-of-the-art performance on the task of recovering reply-to relations.
arXiv Detail & Related papers (2021-06-16T18:00:00Z) - Who Responded to Whom: The Joint Effects of Latent Topics and Discourse in Conversation Structure [53.77234444565652]
We identify the responding relations in the conversation discourse, which link response utterances to their initiations.
We propose a model to learn latent topics and discourse in word distributions, and predict pairwise initiation-response links.
Experimental results on both English and Chinese conversations show that our model significantly outperforms the previous state of the art.
arXiv Detail & Related papers (2021-04-17T17:46:00Z) - Online Conversation Disentanglement with Pointer Networks [13.063606578730449]
We propose an end-to-end online framework for conversation disentanglement.
We design a novel way to embed the whole utterance that comprises timestamp, speaker, and message text.
Our experiments on the Ubuntu IRC dataset show that our method achieves state-of-the-art performance in both link and conversation prediction tasks.
arXiv Detail & Related papers (2020-10-21T15:43:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.