Building and Evaluating Open-Domain Dialogue Corpora with Clarifying
Questions
- URL: http://arxiv.org/abs/2109.05794v1
- Date: Mon, 13 Sep 2021 09:16:14 GMT
- Title: Building and Evaluating Open-Domain Dialogue Corpora with Clarifying
Questions
- Authors: Mohammad Aliannejadi, Julia Kiseleva, Aleksandr Chuklin, Jeffrey
Dalton, Mikhail Burtsev
- Abstract summary: We release a dataset focused on open-domain single- and multi-turn conversations.
We benchmark several state-of-the-art neural baselines.
We propose a pipeline consisting of offline and online steps for evaluating the quality of clarifying questions in various dialogues.
- Score: 65.60888490988236
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Enabling open-domain dialogue systems to ask clarifying questions when
appropriate is an important direction for improving the quality of the system
response. Namely, for cases when a user request is not specific enough for a
conversation system to provide an answer right away, it is desirable to ask a
clarifying question to increase the chances of retrieving a satisfying answer.
To address the problem of 'asking clarifying questions in open-domain
dialogues': (1) we collect and release a new dataset focused on open-domain
single- and multi-turn conversations, (2) we benchmark several state-of-the-art
neural baselines, and (3) we propose a pipeline consisting of offline and
online steps for evaluating the quality of clarifying questions in various
dialogues. These contributions are suitable as a foundation for further
research.
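The core decision the abstract describes can be illustrated with a minimal sketch: answer directly when retrieval is confident, otherwise ask a clarifying question. Everything below (the threshold, the stub retriever, the question bank) is hypothetical and only illustrative; the paper's actual pipeline is neural and trained on the released dataset.

```python
# Hypothetical sketch: decide between answering and asking a clarifying
# question based on retrieval confidence. All names are illustrative.

def respond(request, retrieve, clarify_bank, threshold=0.5):
    """Return an answer if retrieval is confident, else a clarifying question."""
    candidates = retrieve(request)  # list of (answer, score), best first
    if candidates and candidates[0][1] >= threshold:
        return ("answer", candidates[0][0])
    # Request is too ambiguous: pick a clarifying question for this topic.
    question = clarify_bank.get(request, "Could you be more specific?")
    return ("clarify", question)

# Toy usage with a stub retriever.
def toy_retrieve(request):
    index = {"weather in Paris today": [("Sunny, 22C", 0.9)]}
    return index.get(request, [("", 0.1)])

bank = {"weather": "Which city's weather are you interested in?"}
print(respond("weather in Paris today", toy_retrieve, bank))  # answers
print(respond("weather", toy_retrieve, bank))                 # clarifies
```

The offline/online evaluation pipeline the authors propose would then judge whether the clarifying question chosen in the second branch actually helps the conversation.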
Related papers
- Social Commonsense-Guided Search Query Generation for Open-Domain
Knowledge-Powered Conversations [66.16863141262506]
We present a novel approach that focuses on generating internet search queries guided by social commonsense.
Our proposed framework addresses passive user interactions by integrating topic tracking, commonsense response generation and instruction-driven query generation.
arXiv Detail & Related papers (2023-10-22T16:14:56Z)
- FCC: Fusing Conversation History and Candidate Provenance for Contextual
Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z)
- Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z)
- Interactive Question Answering Systems: Literature Review [17.033640293433397]
Interactive question answering is a recently proposed and increasingly popular solution that resides at the intersection of question answering and dialogue systems.
By permitting users to ask follow-up questions, interactive question answering lets them interact with the system dynamically and receive more precise results.
This survey offers a detailed overview of the interactive question-answering methods that are prevalent in current literature.
arXiv Detail & Related papers (2022-09-04T13:46:54Z)
- Multifaceted Improvements for Conversational Open-Domain Question
Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements, where they can be selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- Saying No is An Art: Contextualized Fallback Responses for Unanswerable
Dialogue Queries [3.593955557310285]
Most dialogue systems rely on hybrid approaches for generating a set of ranked responses.
We design a neural approach that generates fallback responses that are contextually consistent with the user query.
Our simple approach makes use of rules over dependency parses and a text-to-text transformer fine-tuned on synthetic data of question-response pairs.
arXiv Detail & Related papers (2020-12-03T12:34:22Z)
- ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue
Systems (ClariQ) [64.60303062063663]
This document presents a detailed description of the challenge on clarifying questions for dialogue systems (ClariQ).
The challenge is organized as part of the Conversational AI challenge series (ConvAI3) at Search Oriented Conversational AI (SCAI) EMNLP workshop in 2020.
arXiv Detail & Related papers (2020-09-23T19:48:02Z)
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term
Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
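The frequency-based term importance idea in the entry above can be sketched in a few lines: expand the current query with the most frequent content terms from the conversation history. This is a toy illustration under assumed details (the stopword list, tokenization, and cutoff `k` are invented here); the paper's actual CQR methods use richer signals and a pretrained sequence-to-sequence rewriter.

```python
# Hypothetical sketch of frequency-based term importance estimation for
# conversational query expansion. The stopword list is a toy placeholder.
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "what", "about", "of", "in",
             "tell", "me", "do"}

def expand_query(query, history, k=2):
    """Append the k most frequent content terms from the history to the query."""
    terms = [w for turn in history for w in turn.lower().split()
             if w not in STOPWORDS and w not in query.lower().split()]
    important = [t for t, _ in Counter(terms).most_common(k)]
    return query + " " + " ".join(important) if important else query

history = ["tell me about marine biology",
           "what do marine biologists study"]
print(expand_query("what about salaries", history))
```

With this toy history, the ambiguous follow-up query is expanded with the dominant context terms, making it self-contained for an ad-hoc retrieval stage.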
This list is automatically generated from the titles and abstracts of the papers in this site.