An Empirical Study of Clarifying Question-Based Systems
- URL: http://arxiv.org/abs/2008.00279v1
- Date: Sat, 1 Aug 2020 15:10:11 GMT
- Title: An Empirical Study of Clarifying Question-Based Systems
- Authors: Jie Zou, Evangelos Kanoulas, and Yiqun Liu
- Abstract summary: We conduct an online experiment by deploying an experimental system, which interacts with users by asking clarifying questions against a product repository.
We collect both implicit interaction behavior data and explicit feedback from users showing that: (a) users are willing to answer a good number of clarifying questions (11-21 on average), but not many more than that.
- Score: 15.767515065224016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Search and recommender systems that take the initiative to ask clarifying
questions to better understand users' information needs are receiving
increasing attention from the research community. However, to the best of our
knowledge, there is no empirical study to quantify whether and to what extent
users are willing or able to answer these questions. In this work, we conduct
an online experiment by deploying an experimental system, which interacts with
users by asking clarifying questions against a product repository. We collect
both implicit interaction behavior data and explicit feedback from users
showing that: (a) users are willing to answer a good number of clarifying
questions (11-21 on average), but not many more than that; (b) most users
answer questions until they reach the target product, although a fraction of
them stop due to fatigue or to receiving irrelevant questions; (c) a portion of
users' answers (12-17%) actually contradict the description of the target
product; and (d) most users (66-84%) find the question-based system helpful for
completing their tasks. Some of these findings contradict assumptions currently
made in simulated evaluations in the field; they point towards improvements in
the evaluation framework and can inspire future interactive search/recommender
system designs.
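To make the setup concrete, here is a minimal sketch in Python of the kind of interaction loop described above: the system asks attribute-based clarifying questions against a toy product repository, narrows the candidate set with each answer, and stops at the target or at a fatigue cap. This is not the authors' deployed system; the products, attributes, and cap value are illustrative assumptions.

```python
# A minimal sketch, not the authors' deployed system. Toy clarifying-question
# loop over a small product repository; all names are illustrative.

FATIGUE_CAP = 21  # upper end of the 11-21 questions users tolerated on average

products = [
    {"id": 1, "color": "black", "brand": "acme", "wireless": "yes"},
    {"id": 2, "color": "black", "brand": "zenit", "wireless": "no"},
    {"id": 3, "color": "white", "brand": "acme", "wireless": "yes"},
]

def pick_attribute(candidates, asked):
    """Ask about the unasked attribute that best splits the candidates."""
    attrs = [a for a in candidates[0] if a != "id" and a not in asked]
    return max(attrs, key=lambda a: len({p[a] for p in candidates}), default=None)

def clarify_loop(candidates, answer_fn):
    asked = set()
    for _ in range(FATIGUE_CAP):
        if len(candidates) <= 1:
            break
        attr = pick_attribute(candidates, asked)
        if attr is None:
            break
        asked.add(attr)
        answer = answer_fn(attr)  # e.g. show the user "Which color?"
        # The study found 12-17% of answers contradict the target, so a robust
        # system should not filter hard on every answer; here we simply keep
        # the old candidate set if a filter would empty it.
        candidates = [p for p in candidates if p[attr] == answer] or candidates
    return candidates

# Simulated user whose target is product 1 and who always answers truthfully.
target = products[0]
print(clarify_loop(products, lambda attr: target[attr]))  # -> [product 1]
```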
Related papers
- Can Users Detect Biases or Factual Errors in Generated Responses in Conversational Information-Seeking? [13.790574266700006]
We investigate the limitations of response generation in conversational information-seeking systems.
The study addresses the problem of query answerability and the challenge of response incompleteness.
Our analysis reveals that it is easier for users to detect response incompleteness than query answerability.
arXiv Detail & Related papers (2024-10-28T20:55:00Z)
- QAGCF: Graph Collaborative Filtering for Q&A Recommendation [58.21387109664593]
Question and answer (Q&A) platforms usually recommend question-answer pairs to meet users' knowledge acquisition needs.
This makes user behaviors more complex, and presents two challenges for Q&A recommendation.
We introduce Question & Answer Graph Collaborative Filtering (QAGCF), a graph neural network model that creates separate graphs for collaborative and semantic views.
arXiv Detail & Related papers (2024-06-07T10:52:37Z)
- CLARINET: Augmenting Language Models to Ask Clarification Questions for Retrieval [52.134133938779776]
We present CLARINET, a system that asks informative clarification questions by choosing questions whose answers would maximize certainty in the correct candidate.
Our approach works by augmenting a large language model (LLM) to condition on a retrieval distribution, finetuning end-to-end to generate the question that would have maximized the rank of the true candidate at each turn. A schematic sketch of this certainty-maximizing selection appears after this list.
arXiv Detail & Related papers (2024-04-28T18:21:31Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Estimating the Usefulness of Clarifying Questions and Answers for Conversational Search [17.0363715044341]
We propose a method for processing answers to clarifying questions, moving away from previous work that simply appends answers to the original query.
Specifically, we propose a classifier for assessing the usefulness of the prompted clarifying question and the answer given by the user; a minimal sketch of this gating idea appears after this list.
Results demonstrate significant improvements over strong non-mixed-initiative baselines.
arXiv Detail & Related papers (2024-01-21T11:04:30Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Medical Question Understanding and Answering with Knowledge Grounding and Semantic Self-Supervision [53.692793122749414]
We introduce a medical question understanding and answering system with knowledge grounding and semantic self-supervision.
Our system is a pipeline that first summarizes a long, medical, user-written question using a supervised summarization loss.
It then matches the summarized user question with an FAQ from a trusted medical knowledge base, and retrieves a fixed number of relevant sentences from the corresponding answer document.
arXiv Detail & Related papers (2022-09-30T08:20:32Z)
- Interactive Question Answering Systems: Literature Review [17.033640293433397]
Interactive question answering is a recently proposed and increasingly popular solution that resides at the intersection of question answering and dialogue systems.
By permitting the user to ask more questions, interactive question answering enables users to dynamically interact with the system and receive more precise results.
This survey offers a detailed overview of the interactive question-answering methods that are prevalent in current literature.
arXiv Detail & Related papers (2022-09-04T13:46:54Z)
- Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning [69.42679922160684]
We propose feedback-weighted learning based on importance sampling to improve upon an initial supervised system using binary user feedback; a schematic sketch of the reweighting appears after this list.
Our work opens the prospect of exploiting interactions with real users and improving conversational systems after deployment.
arXiv Detail & Related papers (2020-11-01T19:50:34Z)
- Review-guided Helpful Answer Identification in E-commerce [38.276241153439955]
Product-specific community question answering platforms can greatly help address the concerns of potential customers.
The user-provided answers on such platforms often vary greatly in quality.
Helpfulness votes from the community can indicate the overall quality of the answer, but they are often missing.
arXiv Detail & Related papers (2020-03-13T11:34:29Z)
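For CLARINET above, a minimal sketch of the selection criterion only: among candidate clarification questions, pick the one whose expected answer most reduces uncertainty about the correct item under a retrieval distribution. CLARINET itself finetunes an LLM end-to-end, which this sketch does not reproduce; the toy distributions and answer model below are hypothetical.

```python
import math

def entropy(dist):
    """Shannon entropy of a {candidate: probability} distribution."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def expected_certainty_gain(prior, question, answer_model):
    """Expected entropy drop over the candidates after asking `question`.

    `prior` is the retrieval distribution; `answer_model(question)` simulates
    the user, yielding (answer_probability, posterior_distribution) pairs.
    """
    expected_posterior = sum(
        answer_prob * entropy(posterior)
        for answer_prob, posterior in answer_model(question)
    )
    return entropy(prior) - expected_posterior

def best_question(prior, questions, answer_model):
    return max(questions,
               key=lambda q: expected_certainty_gain(prior, q, answer_model))

# Two candidates; one question separates them perfectly, the other is
# uninformative, so the first is selected.
prior = {"item_a": 0.5, "item_b": 0.5}
answers = {
    "Is it wireless?": [(0.5, {"item_a": 1.0, "item_b": 0.0}),
                        (0.5, {"item_a": 0.0, "item_b": 1.0})],
    "Do you like it?": [(1.0, prior)],
}
print(best_question(prior, answers, answers.get))  # -> "Is it wireless?"
```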
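For "Estimating the Usefulness of Clarifying Questions and Answers for Conversational Search" above, a minimal sketch of the gating idea: score the clarifying answer against the query and incorporate it only when the score clears a threshold, rather than always appending it. The heuristic scorer is a hypothetical stand-in for the paper's trained classifier, which also conditions on the clarifying question itself.

```python
def usefulness_score(query, answer):
    """Hypothetical stand-in for the paper's trained usefulness classifier."""
    if answer.strip().lower() in {"", "no", "i don't know"}:
        return 0.0  # uninformative answers score zero
    overlap = len(set(query.lower().split()) & set(answer.lower().split()))
    return overlap / (overlap + 1)  # more shared vocabulary -> higher score

def expand_query(query, answer, threshold=0.5):
    """Incorporate the clarifying answer only when judged useful, instead of
    unconditionally appending it to the original query."""
    if usefulness_score(query, answer) >= threshold:
        return f"{query} {answer}"
    return query

print(expand_query("lightweight laptop for travel",
                   "a lightweight one with long battery life"))  # expanded
print(expand_query("lightweight laptop for travel",
                   "i don't know"))                              # unchanged
```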
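For "Improving Conversational Question Answering Systems after Deployment using Feedback-Weighted Learning" above, a minimal sketch of the reweighting idea: scale each logged example's loss by binary user feedback and an importance weight over action probabilities. Field names, tensor shapes, and the surrounding training loop are illustrative assumptions, not the paper's code.

```python
import torch

def feedback_weighted_step(model, optimizer, batch):
    """One update from logged (input, action, binary feedback, propensity)
    tuples; `batch` field names and shapes are illustrative assumptions."""
    logits = model(batch["inputs"])                       # [B, num_actions]
    log_probs = torch.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, batch["actions"].unsqueeze(1)).squeeze(1)
    # Importance weight: probability of the logged action under the current
    # model divided by its probability under the system that logged it.
    weights = (chosen.exp() / batch["propensities"]).detach()
    # Binary feedback in {0, 1}: reinforce only the actions users approved.
    loss = -(weights * batch["feedback"] * chosen).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```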
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.