Estimating the Usefulness of Clarifying Questions and Answers for
Conversational Search
- URL: http://arxiv.org/abs/2401.11463v1
- Date: Sun, 21 Jan 2024 11:04:30 GMT
- Title: Estimating the Usefulness of Clarifying Questions and Answers for
Conversational Search
- Authors: Ivan Sekulić, Weronika Łajewska, Krisztian Balog, Fabio Crestani
- Abstract summary: We propose a method for processing answers to clarifying questions, moving away from previous work that simply appends answers to the original query.
Specifically, we propose a classifier for assessing the usefulness of the prompted clarifying question and the answer given by the user.
Results demonstrate significant improvements over strong non-mixed-initiative baselines.
- Score: 17.0363715044341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the body of research directed towards constructing and generating
clarifying questions in mixed-initiative conversational search systems is vast,
research aimed at processing and comprehending users' answers to such questions
is scarce. To this end, we present a simple yet effective method for processing
answers to clarifying questions, moving away from previous work that simply
appends answers to the original query and thus potentially degrades retrieval
performance. Specifically, we propose a classifier for assessing the usefulness of
the prompted clarifying question and the answer given by the user. Useful
questions or answers are further appended to the conversation history and
passed to a transformer-based query rewriting module. Results demonstrate
significant improvements over strong non-mixed-initiative baselines.
Furthermore, the proposed approach mitigates performance drops when non-useful
questions and answers are utilized.
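As a loose illustration of this gating pipeline, here is a minimal Python sketch; `classify_usefulness` and `rewrite_query` are hypothetical stand-ins for the paper's trained classifier and transformer-based rewriter, not the authors' actual implementation:

```python
# A minimal sketch of the gating flow described above, assuming a binary
# usefulness classifier and a seq2seq rewriter. `classify_usefulness` and
# `rewrite_query` are hypothetical stand-ins for the paper's trained models.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Conversation:
    history: List[str] = field(default_factory=list)

def classify_usefulness(query: str, question: str, answer: str) -> bool:
    # Stand-in for the trained usefulness classifier; here, a trivial
    # heuristic that rejects non-committal answers.
    return answer.strip().lower() not in {"", "no", "i don't know"}

def rewrite_query(history: List[str]) -> str:
    # Stand-in for the transformer-based query rewriting module; here we
    # simply concatenate the conversation history.
    return " ".join(history)

def process_turn(conv: Conversation, query: str, question: str, answer: str) -> str:
    conv.history.append(query)
    # Useful questions and answers enter the history; non-useful ones are
    # dropped so they cannot degrade retrieval.
    if classify_usefulness(query, question, answer):
        conv.history.extend([question, answer])
    return rewrite_query(conv.history)

conv = Conversation()
print(process_turn(conv, "jaguar speed", "Do you mean the animal or the car?", "the animal"))
```

The point of the gate is that a misleading question-answer pair never reaches the rewriter, rather than being appended to the query unconditionally.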
Related papers
- CLARINET: Augmenting Language Models to Ask Clarification Questions for Retrieval [52.134133938779776]
We present CLARINET, a system that asks informative clarification questions by choosing questions whose answers would maximize certainty in the correct candidate.
Our approach works by augmenting a large language model (LLM) to condition on a retrieval distribution, finetuning end-to-end to generate the question that would have maximized the rank of the true candidate at each turn.
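A rough sketch of this selection objective follows, with a toy term-overlap retriever in place of CLARINET's LLM conditioned on a retrieval distribution; using the question text as a proxy for the unseen answer is a simplification, and every name here is illustrative:

```python
# Prefer the clarifying question that most sharpens (minimizes the entropy
# of) the retrieval distribution over candidates.
import math
import re
from typing import Dict, List

def tokens(text: str) -> set:
    return set(re.findall(r"\w+", text.lower()))

def entropy(dist: Dict[str, float]) -> float:
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def retrieval_dist(query: str, candidates: List[str]) -> Dict[str, float]:
    # Toy retriever: score candidates by term overlap, normalize to a distribution.
    scores = {c: 1 + len(tokens(query) & tokens(c)) for c in candidates}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def select_question(query: str, questions: List[str], candidates: List[str]) -> str:
    return min(questions, key=lambda q: entropy(retrieval_dist(query + " " + q, candidates)))

qs = ["Do you mean the animal?", "Do you want pricing information?"]
cands = ["jaguar animal habitat", "jaguar car price"]
print(select_question("jaguar", qs, cands))  # -> "Do you mean the animal?"
```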
arXiv Detail & Related papers (2024-04-28T18:21:31Z)
- Asking Multimodal Clarifying Questions in Mixed-Initiative Conversational Search [89.1772985740272]
In mixed-initiative conversational search systems, clarifying questions are used to help users who struggle to express their intentions in a single query.
We hypothesize that in scenarios where multimodal information is pertinent, the clarification process can be improved by using non-textual information.
We collect a dataset named Melon that contains over 4k multimodal clarifying questions, enriched with over 14k images.
Several analyses are conducted to understand the importance of multimodal contents during the query clarification phase.
arXiv Detail & Related papers (2024-02-12T16:04:01Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions [95.92276099234344]
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
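A minimal sketch of the question-database idea: match an incoming (possibly ambiguous) question against a store of unambiguous question-answer pairs and surface every close match as a disambiguation. The Jaccard scorer and the threshold below are illustrative stand-ins for the paper's retriever:

```python
# Multiple hits above the threshold signal that the input was ambiguous.
from typing import Dict, List, Tuple

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(1, len(sa | sb))

def disambiguate(question: str, qa_db: Dict[str, str], threshold: float = 0.3) -> List[Tuple[str, str]]:
    return [(q, a) for q, a in qa_db.items() if jaccard(question, q) >= threshold]

db = {
    "when was the movie dune (1984) released": "December 14, 1984",
    "when was the movie dune (2021) released": "October 22, 2021",
}
print(disambiguate("when was the movie dune released", db))  # both readings returned
```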
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- Answering Unanswered Questions through Semantic Reformulations in Spoken QA [20.216161323866867]
Spoken Question Answering (QA) is a key feature of voice assistants, usually backed by multiple QA systems.
We analyze failed QA requests to identify core challenges: lexical gaps, proposition types, complex syntactic structure, and high specificity.
We propose a Semantic Question Reformulation (SURF) model offering three linguistically-grounded operations (repair, syntactic reshaping, generalization) to rewrite questions to facilitate answering.
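An illustrative sketch of a SURF-style fallback loop: try the original question, then apply the three operations in turn until some QA backend produces an answer. The operations below are toy string heuristics standing in for the paper's learned reformulation model:

```python
import re
from typing import Callable, Optional

def repair(q: str) -> str:
    # e.g., fix a common ASR/typo artifact (toy rule).
    return q.replace("whats", "what is")

def reshape(q: str) -> str:
    # e.g., recast a declarative request as an interrogative (toy rule).
    return re.sub(r"^tell me ", "what is ", q)

def generalize(q: str) -> str:
    # e.g., drop an overly specific trailing modifier (toy rule).
    return re.sub(r" in \d{4}$", "", q)

def answer_with_reformulation(q: str, backend: Callable[[str], Optional[str]]) -> Optional[str]:
    for op in (lambda x: x, repair, reshape, generalize):
        answer = backend(op(q))
        if answer is not None:
            return answer
    return None

backend = lambda q: "a model that answers questions" if q == "what is a qa model" else None
print(answer_with_reformulation("tell me a qa model", backend))
```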
arXiv Detail & Related papers (2023-05-27T07:19:27Z)
- Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
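A rough sketch of this generate-then-revise loop: extract a question-worthy phrase, generate a question for it given the conversation so far, then revise the answer so it matches the generated question's scope. Both the extractor and the generator below are toy stand-ins for the paper's models:

```python
import re
from typing import List, Tuple

def extract_phrases(passage: str) -> List[str]:
    # Toy extractor: multi-word capitalized spans.
    return re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)+", passage)

def generate_question(phrase: str, history: List[str]) -> str:
    # Stand-in for a conditional question generator that sees prior turns.
    return f"What do we know about {phrase}?"

def revise_answer(passage: str, phrase: str) -> str:
    # Revision step: widen the raw phrase to its containing sentence so the
    # stored answer exactly matches the paired question.
    for sentence in re.split(r"(?<=[.!?]) ", passage):
        if phrase in sentence:
            return sentence
    return phrase

def build_pairs(passage: str, history: List[str]) -> List[Tuple[str, str]]:
    pairs = []
    for phrase in extract_phrases(passage):
        question = generate_question(phrase, history)
        pairs.append((question, revise_answer(passage, phrase)))
        history.append(question)
    return pairs

print(build_pairs("The Eiffel Tower opened in 1889. It stands in Paris.", []))
```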
arXiv Detail & Related papers (2022-09-23T04:05:38Z)
- Interactive Question Answering Systems: Literature Review [17.033640293433397]
Interactive question answering is a recently proposed and increasingly popular solution that resides at the intersection of question answering and dialogue systems.
By permitting the user to ask more questions, interactive question answering enables users to dynamically interact with the system and receive more precise results.
This survey offers a detailed overview of the interactive question-answering methods that are prevalent in current literature.
arXiv Detail & Related papers (2022-09-04T13:46:54Z)
- Building and Evaluating Open-Domain Dialogue Corpora with Clarifying Questions [65.60888490988236]
We release a dataset focused on open-domain single- and multi-turn conversations.
We benchmark several state-of-the-art neural baselines.
We propose a pipeline consisting of offline and online steps for evaluating the quality of clarifying questions in various dialogues.
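A loose sketch of the offline step only, assuming each dialogue ships with reference clarifying questions: a system question is scored by its best similarity to any reference. The online step (live user feedback) is not shown, and the Jaccard scorer is an illustrative choice, not the paper's metric:

```python
from typing import List

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(1, len(sa | sb))

def offline_score(system_question: str, references: List[str]) -> float:
    # Best match against any reference clarifying question for the dialogue.
    return max((jaccard(system_question, r) for r in references), default=0.0)
```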
arXiv Detail & Related papers (2021-09-13T09:16:14Z)
- A Graph-guided Multi-round Retrieval Method for Conversational Open-domain Question Answering [52.041815783025186]
We propose a novel graph-guided retrieval method to model the relations among answers across conversation turns.
We also propose to incorporate the multi-round relevance feedback technique to explore the impact of the retrieval context on current question understanding.
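A simplified sketch of the graph-guided idea: passages retrieved in earlier turns are linked (here, when they share a capitalized token, a toy proxy for shared entities), and the current turn's retrieval is expanded with graph neighbors. Both the linking rule and the expansion step are assumptions, not the paper's method:

```python
from collections import defaultdict
from typing import Dict, List, Set

def caps(text: str) -> Set[str]:
    return {w for w in text.split() if w[:1].isupper()}

def build_graph(passages: List[str]) -> Dict[str, Set[str]]:
    # Link passages from previous turns when they share a capitalized token.
    graph: Dict[str, Set[str]] = defaultdict(set)
    for i, a in enumerate(passages):
        for b in passages[i + 1:]:
            if caps(a) & caps(b):
                graph[a].add(b)
                graph[b].add(a)
    return graph

def expand_with_neighbors(retrieved: List[str], graph: Dict[str, Set[str]]) -> List[str]:
    # Add graph neighbors of the current turn's results before reranking.
    expanded = list(retrieved)
    for p in retrieved:
        expanded.extend(n for n in graph.get(p, set()) if n not in expanded)
    return expanded
```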
arXiv Detail & Related papers (2021-04-17T04:39:41Z)
- Analysing the Effect of Clarifying Questions on Document Ranking in Conversational Search [10.335808358080289]
We investigate how different aspects of clarifying questions and user answers affect the quality of ranking.
We introduce a simple lexical baseline that significantly outperforms the existing naive baselines.
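A sketch in the spirit of such a lexical baseline: expand the original query with content terms from the user's answer, skipping stopwords and terms already present. The stopword list and the rule are illustrative, not the authors' exact formulation:

```python
STOPWORDS = {"the", "a", "an", "of", "to", "is", "i", "it", "yes", "no"}

def expand_query(query: str, answer: str) -> str:
    # Keep only novel content terms from the answer.
    seen = set(query.lower().split())
    extra = [t for t in answer.lower().split() if t not in STOPWORDS and t not in seen]
    return query + (" " + " ".join(extra) if extra else "")

print(expand_query("dinosaur", "yes the tyrannosaurus rex"))  # dinosaur tyrannosaurus rex
```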
arXiv Detail & Related papers (2020-08-09T12:55:16Z)
- Open-Retrieval Conversational Question Answering [62.11228261293487]
We introduce an open-retrieval conversational question answering (ORConvQA) setting, where we learn to retrieve evidence from a large collection before extracting answers.
We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers.
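A high-level sketch of the retrieve-rerank-read pipeline; each stage below is a toy stand-in for the paper's Transformer-based retriever, reranker, and reader:

```python
from typing import List

def retrieve(history: List[str], collection: List[str], k: int = 10) -> List[str]:
    # Stand-in for a dense retriever: rank the collection by term overlap
    # with the whole conversation history.
    q = set(" ".join(history).lower().split())
    return sorted(collection, key=lambda p: -len(q & set(p.lower().split())))[:k]

def rerank(history: List[str], passages: List[str], k: int = 3) -> List[str]:
    # Stand-in for a cross-encoder reranker; here we just truncate.
    return passages[:k]

def read(history: List[str], passages: List[str]) -> str:
    # Stand-in for a span-extraction reader; here we return the top passage.
    return passages[0] if passages else ""

history = ["who wrote dune", "when was it published"]
collection = ["Dune was published in 1965.", "Frank Herbert wrote Dune."]
print(read(history, rerank(history, retrieve(history, collection))))
```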
arXiv Detail & Related papers (2020-05-22T19:39:50Z)
- Review-guided Helpful Answer Identification in E-commerce [38.276241153439955]
Product-specific community question answering platforms can greatly help address the concerns of potential customers.
The user-provided answers on such platforms often vary a lot in their qualities.
Helpfulness votes from the community can indicate the overall quality of the answer, but they are often missing.
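A hedged sketch of the review-guided idea: when helpfulness votes are missing, estimate an answer's quality from how well the product's reviews support it. The cosine-over-term-counts scorer below is an illustrative stand-in for the paper's model:

```python
import math
from collections import Counter
from typing import List

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def helpfulness_score(answer: str, reviews: List[str]) -> float:
    # Score the answer by its best lexical agreement with any review.
    bag = Counter(answer.lower().split())
    return max((cosine(bag, Counter(r.lower().split())) for r in reviews), default=0.0)
```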
arXiv Detail & Related papers (2020-03-13T11:34:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.