Analysing the Effect of Clarifying Questions on Document Ranking in
Conversational Search
- URL: http://arxiv.org/abs/2008.03717v2
- Date: Tue, 11 Aug 2020 10:21:13 GMT
- Title: Analysing the Effect of Clarifying Questions on Document Ranking in
Conversational Search
- Authors: Antonios Minas Krasakis, Mohammad Aliannejadi, Nikos Voskarides,
Evangelos Kanoulas
- Abstract summary: We investigate how different aspects of clarifying questions and user answers affect the quality of ranking.
We introduce a simple heuristic-based lexical baseline that significantly outperforms the existing naive baselines.
- Score: 10.335808358080289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research on conversational search highlights the importance of
mixed-initiative in conversations. To enable mixed-initiative, the system
should be able to ask clarifying questions to the user. However, the ability of
the underlying ranking models (which support conversational search) to account
for these clarifying questions and answers when ranking documents has not been
analysed at large. To this end, we analyse the performance of a lexical
ranking model on a conversational search dataset with clarifying questions. We
investigate, both quantitatively and qualitatively, how different aspects of
clarifying questions and user answers affect the quality of ranking. We argue
that there needs to be some fine-grained treatment of the entire conversational
round of clarification, based on the explicit feedback which is present in such
mixed-initiative settings. Informed by our findings, we introduce a simple
heuristic-based lexical baseline that significantly outperforms the existing
naive baselines. Our work aims to enhance our understanding of the challenges
present in this particular task and inform the design of more appropriate
conversational ranking models.
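
To make the analysed setup concrete, below is a minimal sketch of lexical ranking over a single clarification round, contrasting the naive baseline (always append the clarifying question and answer) with a simple heuristic variant. This is a hypothetical illustration, not the authors' implementation: the rank_bm25 package, the toy corpus, and the negative-answer heuristic are all assumptions.

```python
# A minimal sketch of lexical ranking over one clarification round.
# Assumes the rank_bm25 package (pip install rank-bm25); the corpus and the
# negative-answer heuristic are illustrative, not taken from the paper.
from rank_bm25 import BM25Okapi

documents = [
    "dieting and nutrition advice for healthy weight loss",
    "low carb diets and their effect on long term health",
    "the atkins diet explained step by step",
]
bm25 = BM25Okapi([doc.split() for doc in documents])

def rank_naive(query: str, question: str, answer: str):
    """Naive baseline: always append the clarifying question and answer."""
    return bm25.get_scores(f"{query} {question} {answer}".split())

def rank_heuristic(query: str, question: str, answer: str):
    """Hypothetical heuristic: ignore the round when the user rejects the
    clarification, since a negative answer mostly adds noise terms."""
    if answer.strip().lower().startswith("no"):
        return bm25.get_scores(query.split())
    return bm25.get_scores(f"{query} {question} {answer}".split())

print(rank_naive("dieting", "are you interested in low carb diets", "no"))
print(rank_heuristic("dieting", "are you interested in low carb diets", "no"))
```

The naive variant lets the rejected intent ("low carb diets") leak into the query; that kind of failure is what motivates the fine-grained, feedback-aware treatment argued for above.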
Related papers
- PAQA: Toward ProActive Open-Retrieval Question Answering [34.883834970415734]
This work aims to tackle the challenge of generating relevant clarifying questions by taking into account the inherent ambiguities present in both user queries and documents.
We propose PAQA, an extension to the existing AmbiNQ dataset, incorporating clarifying questions.
We then evaluate various models and assess how passage retrieval impacts ambiguity detection and the generation of clarifying questions.
arXiv Detail & Related papers (2024-02-26T14:40:34Z)
- Qsnail: A Questionnaire Dataset for Sequential Question Generation [76.616068047362]
We present the first dataset specifically constructed for the questionnaire generation task, which comprises 13,168 human-written questionnaires.
We conduct experiments on Qsnail, and the results reveal that questionnaires produced by retrieval models and traditional generative models do not fully align with the given research topic and intents.
Despite enhancements through the chain-of-thought prompt and finetuning, questionnaires generated by language models still fall short of human-written questionnaires.
arXiv Detail & Related papers (2024-02-22T04:14:10Z)
- Asking Multimodal Clarifying Questions in Mixed-Initiative Conversational Search [89.1772985740272]
In mixed-initiative conversational search systems, clarifying questions are used to help users who struggle to express their intentions in a single query.
We hypothesize that in scenarios where multimodal information is pertinent, the clarification process can be improved by using non-textual information.
We collect a dataset named Melon that contains over 4k multimodal clarifying questions, enriched with over 14k images.
Several analyses are conducted to understand the importance of multimodal content during the query clarification phase.
arXiv Detail & Related papers (2024-02-12T16:04:01Z)
- Estimating the Usefulness of Clarifying Questions and Answers for Conversational Search [17.0363715044341]
We propose a method for processing answers to clarifying questions, moving away from previous work that simply appends answers to the original query.
Specifically, we propose a classifier for assessing the usefulness of the prompted clarifying question and the answer given by the user (see the sketch after this list).
Results demonstrate significant improvements over strong non-mixed-initiative baselines.
arXiv Detail & Related papers (2024-01-21T11:04:30Z)
- FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset, which is widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z)
- Conversational QA Dataset Generation with Answer Revision [2.5838973036257458]
We introduce a novel framework that extracts question-worthy phrases from a passage and then generates corresponding questions considering previous conversations.
Our framework revises the extracted answers after generating questions so that answers exactly match paired questions.
arXiv Detail & Related papers (2022-09-23T04:05:38Z)
- What should I Ask: A Knowledge-driven Approach for Follow-up Questions Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We construct a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge.
We then propose a two-staged knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z)
- Evaluating Mixed-initiative Conversational Search Systems via User Simulation [9.066817876491053]
We propose a conversational User Simulator, called USi, for automatic evaluation of such search systems.
We show that responses generated by USi are both in line with the underlying information need and comparable to human-generated answers.
arXiv Detail & Related papers (2022-04-17T16:27:33Z)
- Question rewriting? Assessing its importance for conversational question answering [0.6449761153631166]
This work presents a conversational question answering system designed specifically for the Search-Oriented Conversational AI (SCAI) shared task.
In particular, we considered different variations of the question rewriting module to evaluate its influence on the subsequent components.
Our system achieved the best performance in the shared task and our analysis emphasizes the importance of the conversation context representation for the overall system performance.
arXiv Detail & Related papers (2022-01-22T23:31:25Z)
- ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining [61.82562838486632]
We crowdsource four new datasets covering diverse forms of online conversation: news comments, discussion forums, community question answering forums, and email threads.
We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data.
arXiv Detail & Related papers (2021-06-01T22:17:13Z)
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals (see the sketch after this list).
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
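
Two entries above describe mechanisms concrete enough to sketch. First, the usefulness-gating idea from "Estimating the Usefulness of Clarifying Questions and Answers for Conversational Search": append the clarification round to the query only when a classifier predicts it helps. The TF-IDF features, logistic-regression model, and toy training data below are assumptions; the summary does not specify the classifier.

```python
# Hypothetical sketch of gating a clarification round with a usefulness
# classifier; features, model, and toy data are assumptions, since the
# summary only says "a classifier".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training rounds ("query | question | answer") labeled useful (1) or not (0).
rounds = [
    "dieting | are you interested in low carb diets | yes the keto diet",
    "dieting | are you interested in low carb diets | no",
    "jaguar | do you mean the car or the animal | the car please",
    "jaguar | do you mean the car or the animal | I do not know",
]
labels = [1, 0, 1, 0]

usefulness = make_pipeline(TfidfVectorizer(), LogisticRegression())
usefulness.fit(rounds, labels)

def reformulate(query: str, question: str, answer: str) -> str:
    """Append the clarification round only if the classifier deems it useful."""
    if usefulness.predict([f"{query} | {question} | {answer}"])[0] == 1:
        return f"{query} {question} {answer}"
    return query
```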
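
Second, the frequency-based term-importance expansion from "Multi-Stage Conversational Passage Retrieval" can be sketched as below; plain term frequency over the conversation history stands in for the paper's actual frequency-based signals, and the stopword list is illustrative.

```python
# Illustrative sketch of frequency-based term-importance expansion; the plain
# term-frequency scoring and stopword list are assumed stand-ins for the
# paper's actual signals.
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "is", "are", "was", "what",
             "who", "me", "tell", "about", "it"}

def expand_query(current_query: str, history: list[str], k: int = 3) -> str:
    """Append the k most frequent non-stopword history terms to the query."""
    seen = set(current_query.lower().split()) | STOPWORDS
    counts = Counter(
        tok for turn in history for tok in turn.lower().split() if tok not in seen
    )
    expansion = [term for term, _ in counts.most_common(k)]
    return " ".join([current_query] + expansion)

history = [
    "tell me about the neverending story film",
    "who directed the neverending story",
]
print(expand_query("what is it about", history))
# -> "what is it about neverending story film"
```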