What Types of Questions Require Conversation to Answer? A Case Study of
AskReddit Questions
- URL: http://arxiv.org/abs/2303.17710v2
- Date: Mon, 3 Apr 2023 21:14:54 GMT
- Title: What Types of Questions Require Conversation to Answer? A Case Study of
AskReddit Questions
- Authors: Shih-Hong Huang, Chieh-Yang Huang, Ya-Fang Lin, Ting-Hao 'Kenneth'
Huang
- Abstract summary: We aim to push the boundaries of conversational systems by examining the types of nebulous, open-ended questions that can best be answered through conversation.
We sampled 500 questions from one million open-ended requests posted on AskReddit, and then recruited online crowd workers to answer eight inquiries about these questions.
We found that the issues people believe require conversation to resolve satisfactorily are highly social and personal.
- Score: 16.75969771718778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of automated conversational systems such as chatbots,
spoken-dialogue systems, and smart speakers, has significantly impacted modern
digital life. However, these systems are primarily designed to provide answers
to well-defined questions rather than to support users in exploring complex,
ill-defined questions. In this paper, we aim to push the boundaries of
conversational systems by examining the types of nebulous, open-ended questions
that can best be answered through conversation. We first sampled 500 questions
from one million open-ended requests posted on AskReddit, and then recruited
online crowd workers to answer eight inquiries about these questions. We also
performed open coding to categorize the questions into 27 different domains. We
found that the issues people believe require conversation to resolve
satisfactorily are highly social and personal. Our work provides insights into
how future research could be geared to align with users' needs.
Related papers
- Auto FAQ Generation [0.0]
We propose a system for generating FAQ documents that extracts the salient questions and their corresponding answers from sizeable text documents.
We use existing text summarization, sentence ranking via the TextRank algorithm, and question-generation tools to create an initial set of questions and answers.
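The sentence-ranking step mentioned in this summary can be illustrated with a minimal TextRank sketch. This is a generic illustration of the TextRank idea (build a sentence-similarity graph, then run PageRank over it), not the paper's actual implementation; the similarity function and parameters here are assumptions.

```python
import math
import re

def textrank_sentences(sentences, damping=0.85, iters=50):
    """Rank sentences by TextRank: build a similarity graph, run PageRank."""
    def tokens(s):
        return set(re.findall(r"\w+", s.lower()))

    tok = [tokens(s) for s in sentences]
    n = len(sentences)

    # Similarity: shared-word count normalized by log sentence lengths,
    # following the classic TextRank formulation.
    def sim(i, j):
        common = len(tok[i] & tok[j])
        denom = math.log(len(tok[i]) + 1) + math.log(len(tok[j]) + 1)
        return common / denom if denom > 0 else 0.0

    weights = [[sim(i, j) if i != j else 0.0 for j in range(n)]
               for i in range(n)]

    # Power iteration of weighted PageRank over the similarity graph.
    scores = [1.0] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                out = sum(weights[j])
                if weights[j][i] > 0 and out > 0:
                    rank += weights[j][i] / out * scores[j]
            new.append((1 - damping) + damping * rank)
        scores = new

    # Highest-scoring (most central) sentences first.
    return sorted(zip(scores, sentences), reverse=True)
```

Sentences that share vocabulary with many others accumulate score, so the top-ranked sentences serve as extractive summary candidates from which questions can then be generated.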
arXiv Detail & Related papers (2024-05-13T03:30:27Z)
- Which questions should I answer? Salience Prediction of Inquisitive Questions [118.097974193544]
We show that highly salient questions are empirically more likely to be answered in the same article.
We further validate our findings by showing that answering salient questions is an indicator of summarization quality in news.
arXiv Detail & Related papers (2024-04-16T21:33:05Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- HeySQuAD: A Spoken Question Answering Dataset [2.3881849082514153]
This study presents a new large-scale community-shared SQA dataset called HeySQuAD.
Our goal is to measure the ability of machines to accurately understand noisy spoken questions and provide reliable answers.
arXiv Detail & Related papers (2023-04-26T17:15:39Z)
- Interactive Question Answering Systems: Literature Review [17.033640293433397]
Interactive question answering is a recently proposed and increasingly popular solution that resides at the intersection of question answering and dialogue systems.
By permitting users to ask follow-up questions, interactive question answering enables them to interact with the system dynamically and receive more precise results.
This survey offers a detailed overview of the interactive question-answering methods that are prevalent in current literature.
arXiv Detail & Related papers (2022-09-04T13:46:54Z)
- Evaluating Mixed-initiative Conversational Search Systems via User Simulation [9.066817876491053]
We propose a conversational User Simulator, called USi, for automatic evaluation of such search systems.
We show that responses generated by USi are both in line with the underlying information need and comparable to human-generated answers.
arXiv Detail & Related papers (2022-04-17T16:27:33Z)
- What makes us curious? Analysis of a corpus of open-domain questions [0.11470070927586014]
In 2017, "We the Curious" science centre in Bristol started a project to capture the curiosity of Bristolians.
The project collected more than 10,000 questions on various topics.
We developed an Artificial Intelligence tool that can be used to perform various processing tasks.
arXiv Detail & Related papers (2021-10-28T19:37:43Z)
- Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions [65.60888490988236]
We release a dataset focused on open-domain single- and multi-turn conversations.
We benchmark several state-of-the-art neural baselines.
We propose a pipeline consisting of offline and online steps for evaluating the quality of clarifying questions in various dialogues.
arXiv Detail & Related papers (2021-09-13T09:16:14Z)
- QAConv: Question Answering on Informative Conversations [85.2923607672282]
We focus on informative conversations including business emails, panel discussions, and work channels.
In total, we collect 34,204 QA pairs, including span-based, free-form, and unanswerable questions.
arXiv Detail & Related papers (2021-05-14T15:53:05Z)
- Open-Retrieval Conversational Machine Reading [80.13988353794586]
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
arXiv Detail & Related papers (2021-02-17T08:55:01Z)
- ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ) [64.60303062063663]
This document presents a detailed description of the challenge on clarifying questions for dialogue systems (ClariQ).
The challenge is organized as part of the Conversational AI challenge series (ConvAI3) at Search Oriented Conversational AI (SCAI) EMNLP workshop in 2020.
arXiv Detail & Related papers (2020-09-23T19:48:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.