Multi-stage Clarification in Conversational AI: The case of
Question-Answering Dialogue Systems
- URL: http://arxiv.org/abs/2110.15235v1
- Date: Thu, 28 Oct 2021 15:45:44 GMT
- Title: Multi-stage Clarification in Conversational AI: The case of
Question-Answering Dialogue Systems
- Authors: Hadrien Lautraite, Nada Naji, Louis Marceau, Marc Queudot, Eric
Charton
- Abstract summary: Clarification resolution plays an important role in various information retrieval tasks such as interactive question answering and conversational search.
We propose a multi-stage clarification mechanism for prompting clarification and query selection in the context of a question answering dialogue system.
Our proposed mechanism improves the overall user experience and outperforms competitive baselines on two datasets.
- Score: 0.27998963147546135
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Clarification resolution plays an important role in various information
retrieval tasks such as interactive question answering and conversational
search. In such contexts, the user often formulates their information need as a
short, ambiguous query; some popular search interfaces then prompt the
user to confirm their intent (e.g. "Did you mean ... ?") or to rephrase if
needed. When it comes to dialogue systems, having fluid user-bot exchanges is
key to a good user experience. In the absence of such a clarification mechanism,
one of two responses is given to the user: 1) a direct answer, which
may be irrelevant if the intent was unclear, or 2) a generic
fallback message informing the user that the retrieval tool is incapable of
handling the query. Both scenarios might raise frustration and degrade the user
experience. To this end, we propose a multi-stage clarification mechanism for
prompting clarification and query selection in the context of a question
answering dialogue system. We show that our proposed mechanism improves the
overall user experience and outperforms competitive baselines on two
datasets, namely the public in-scope out-of-scope dataset and a commercial
dataset based on real user logs.
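The abstract describes three possible system behaviors: answer directly, prompt for clarification, or fall back. A minimal sketch of such a decision flow is below; the function names, thresholds, and three-way branching are illustrative assumptions, not the paper's actual mechanism.

```python
# Hypothetical sketch of a clarification-aware QA response flow.
# retrieve_candidates and answer are caller-supplied; the thresholds
# (high=0.8, low=0.3) are arbitrary illustration values.

def respond(query, retrieve_candidates, answer, high=0.8, low=0.3):
    """Return a direct answer, a clarification prompt, or a fallback."""
    # Retrieve candidate interpretations with confidence scores,
    # sorted by descending score: [(intent, score), ...].
    candidates = retrieve_candidates(query)

    if not candidates or candidates[0][1] < low:
        # No plausible interpretation -> generic fallback message.
        return ("fallback", "Sorry, I couldn't find an answer to that.")
    if candidates[0][1] >= high:
        # Confident match -> answer directly.
        return ("answer", answer(candidates[0][0]))
    # Ambiguous -> prompt the user to select among candidate queries.
    options = [intent for intent, _ in candidates[:3]]
    return ("clarify", "Did you mean: " + " / ".join(options) + "?")
```

The clarification branch is what distinguishes this flow from the two degenerate behaviors the abstract criticizes: instead of guessing or giving up, the system surfaces its candidate interpretations to the user.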
Related papers
- Can Users Detect Biases or Factual Errors in Generated Responses in Conversational Information-Seeking? [13.790574266700006]
We investigate the limitations of response generation in conversational information-seeking systems.
The study addresses the problem of query answerability and the challenge of response incompleteness.
Our analysis reveals that it is easier for users to detect response incompleteness than query answerability.
arXiv Detail & Related papers (2024-10-28T20:55:00Z) - Multi-User MultiWOZ: Task-Oriented Dialogues among Multiple Users [51.34484827552774]
We release the Multi-User MultiWOZ dataset: task-oriented dialogues among two users and one agent.
These dialogues reflect interesting dynamics of collaborative decision-making in task-oriented scenarios.
We propose a novel task of multi-user contextual query rewriting: to rewrite a task-oriented chat between two users as a concise task-oriented query.
arXiv Detail & Related papers (2023-10-31T14:12:07Z) - Social Commonsense-Guided Search Query Generation for Open-Domain
Knowledge-Powered Conversations [66.16863141262506]
We present a novel approach that focuses on generating internet search queries guided by social commonsense.
Our proposed framework addresses passive user interactions by integrating topic tracking, commonsense response generation and instruction-driven query generation.
arXiv Detail & Related papers (2023-10-22T16:14:56Z) - Evaluating Mixed-initiative Conversational Search Systems via User
Simulation [9.066817876491053]
We propose a conversational User Simulator, called USi, for automatic evaluation of such search systems.
We show that responses generated by USi are both in line with the underlying information need and comparable to human-generated answers.
arXiv Detail & Related papers (2022-04-17T16:27:33Z) - Building and Evaluating Open-Domain Dialogue Corpora with Clarifying
Questions [65.60888490988236]
We release a dataset focused on open-domain single- and multi-turn conversations.
We benchmark several state-of-the-art neural baselines.
We propose a pipeline consisting of offline and online steps for evaluating the quality of clarifying questions in various dialogues.
arXiv Detail & Related papers (2021-09-13T09:16:14Z) - Dialogue History Matters! Personalized Response Selection in Multi-turn
Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z) - Open-Retrieval Conversational Machine Reading [80.13988353794586]
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
arXiv Detail & Related papers (2021-02-17T08:55:01Z) - Interactive Question Clarification in Dialogue via Reinforcement
Learning [36.746578601398866]
We propose a reinforcement model to clarify ambiguous questions by suggesting refinements of the original query.
The model is trained using reinforcement learning with a deep policy network.
We evaluate our model based on real-world user clicks and demonstrate significant improvements.
arXiv Detail & Related papers (2020-12-17T06:38:04Z) - Saying No is An Art: Contextualized Fallback Responses for Unanswerable
Dialogue Queries [3.593955557310285]
Most dialogue systems rely on hybrid approaches for generating a set of ranked responses.
We design a neural approach which generates responses which are contextually aware with the user query.
Our simple approach makes use of rules over dependency parses and a text-to-text transformer fine-tuned on synthetic data of question-response pairs.
arXiv Detail & Related papers (2020-12-03T12:34:22Z) - Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term
Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
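The entry above describes expanding conversational queries with important terms extracted from the conversational context using frequency-based signals. A toy sketch of that general idea follows; the raw-count weighting and stopword filter are assumptions for illustration, not the paper's exact signal.

```python
# Hypothetical frequency-based term-importance expansion for a
# conversational query. Terms already in the query and common stopwords
# are excluded; the k most frequent remaining context terms are appended.
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "of", "to", "and", "what", "how"}

def expand_query(current_query, context_turns, k=3):
    """Append the k most frequent non-stopword context terms to the query."""
    query_tokens = set(current_query.lower().split())
    counts = Counter(
        tok
        for turn in context_turns
        for tok in turn.lower().split()
        if tok not in STOPWORDS and tok not in query_tokens
    )
    expansion = [term for term, _ in counts.most_common(k)]
    return current_query + " " + " ".join(expansion) if expansion else current_query
```

For example, a follow-up like "what about side effects" in a conversation about aspirin would be expanded with the salient context term "aspirin", making it answerable by a standard ad-hoc retriever.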
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.