Analysing Mixed Initiatives and Search Strategies during Conversational
Search
- URL: http://arxiv.org/abs/2109.05955v1
- Date: Mon, 13 Sep 2021 13:30:10 GMT
- Title: Analysing Mixed Initiatives and Search Strategies during Conversational
Search
- Authors: Mohammad Aliannejadi, Leif Azzopardi, Hamed Zamani, Evangelos
Kanoulas, Paul Thomas, Nick Craswell
- Abstract summary: We present a model for conversational search -- from which we instantiate different observed conversational search strategies, where the agent elicits feedback either (i) Feedback-First or (ii) Feedback-After.
Our analysis reveals that there is no superior or dominant combination; instead, it shows that query clarifications are better when asked first, while query suggestions are better when asked after presenting results.
- Score: 31.63357369175702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Information seeking conversations between users and Conversational Search
Agents (CSAs) consist of multiple turns of interaction. While users initiate a
search session, ideally a CSA should sometimes take the lead in the
conversation, obtaining feedback from the user by offering query suggestions
or asking for query clarifications, i.e. mixed initiative. This creates the
potential for more engaging conversational searches, but substantially
increases the complexity of modelling and evaluating such scenarios due to the
large interaction space coupled with the trade-offs between the costs and
benefits of the different interactions. In this paper, we present a model for
conversational search -- from which we instantiate different observed
conversational search strategies, where the agent elicits feedback either (i) Feedback-First
or (ii) Feedback-After. Using 49 TREC WebTrack Topics, we performed an analysis
comparing how well these different strategies combine with different mixed
initiative approaches: (i) Query Suggestions vs. (ii) Query Clarifications. Our
analysis reveals that there is no superior or dominant combination; instead, it
shows that query clarifications are better when asked first, while query
suggestions are better when asked after presenting results. We also show that
the best strategy and approach depend on the trade-offs between the relative
costs of querying and giving feedback, the performance of the initial
query, the number of assessments per query, and the total amount of gain
required. While this work highlights the complexities and challenges involved
in analysing CSAs, it provides the foundations for evaluating conversational
strategies and conversational search agents in batch/offline settings.
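As a rough illustration of the kind of cost/gain trade-off analysed in the paper, the sketch below simulates the two strategies under assumed interaction costs and an assumed precision boost per round of feedback; the cost values, precision numbers, and function names are illustrative assumptions, not the paper's actual model.

```python
"""Illustrative simulation of Feedback-First vs. Feedback-After strategies.
All costs, gains, and precision values are assumed for this sketch; they are
not taken from the paper."""

import random

# Assumed per-interaction costs (arbitrary units).
COST_QUERY = 3.0     # issuing or reformulating a query
COST_FEEDBACK = 1.5  # answering a clarification or picking a suggestion
COST_ASSESS = 1.0    # assessing one returned result


def simulate(strategy: str, target_gain: float = 10.0, seed: int = 0) -> float:
    """Return the total cost incurred until `target_gain` has been accumulated."""
    rng = random.Random(seed)
    cost, gain = COST_QUERY, 0.0  # the user always issues an initial query
    precision = 0.3               # assumed precision of the initial query

    while gain < target_gain:
        if strategy == "feedback_first":
            # Elicit feedback before presenting any results, then issue the
            # improved query.
            cost += COST_FEEDBACK + COST_QUERY
            precision = min(1.0, precision + 0.2)

        # The user assesses a batch of results at the current precision.
        for _ in range(5):
            cost += COST_ASSESS
            if rng.random() < precision:
                gain += 1.0
            if gain >= target_gain:
                return cost

        if strategy == "feedback_after":
            # Elicit feedback only after results have been presented, then
            # issue the improved query for the next round.
            cost += COST_FEEDBACK + COST_QUERY
            precision = min(1.0, precision + 0.2)

    return cost


if __name__ == "__main__":
    for strategy in ("feedback_first", "feedback_after"):
        print(strategy, simulate(strategy))
```

Sweeping the assumed costs, the initial precision, and the target gain flips which strategy is cheaper, which mirrors the paper's finding that no single combination dominates.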
Related papers
- Aligning Query Representation with Rewritten Query and Relevance Judgments in Conversational Search [32.35446999027349]
We leverage both rewritten queries and relevance judgments in the conversational search data to train a better query representation model.
The proposed model, the Query Representation Alignment Conversational Retriever (QRACDR), is tested on eight datasets.
arXiv Detail & Related papers (2024-07-29T17:14:36Z)
- ProCIS: A Benchmark for Proactive Retrieval in Conversations [21.23826888841565]
We introduce a large-scale dataset for proactive document retrieval that consists of over 2.8 million conversations.
We conduct crowdsourcing experiments to obtain high-quality and relatively complete relevance judgments.
We also collect annotations identifying which parts of the conversation are relevant to each document, enabling us to evaluate proactive retrieval systems.
arXiv Detail & Related papers (2024-05-10T13:11:07Z)
- Selecting Query-bag as Pseudo Relevance Feedback for Information-seeking Conversations [76.70349332096693]
Information-seeking dialogue systems are widely used in e-commerce.
We propose a Query-bag based Pseudo Relevance Feedback framework (QB-PRF).
It constructs a query-bag of related queries to serve as pseudo signals that guide information-seeking conversations.
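As a rough illustration of the query-bag idea only (not the paper's QB-PRF components), the sketch below selects the past queries most similar to the current one to act as pseudo relevance signals; the Jaccard similarity, the query pool, and the function names are assumptions.

```python
"""Illustrative selection of a "query bag": related past queries reused as
pseudo relevance signals. The similarity measure, pool, and function names
are assumptions, not the paper's QB-PRF components."""


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two term sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0


def build_query_bag(current: str, pool: list[str], k: int = 3) -> list[str]:
    """Return the k pool queries most similar to the current user query."""
    cur_terms = set(current.lower().split())
    ranked = sorted(pool,
                    key=lambda q: jaccard(cur_terms, set(q.lower().split())),
                    reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    pool = ["best running shoes for flat feet", "running shoe size guide",
            "waterproof trail running shoes", "cheap laptops under 500"]
    print(build_query_bag("waterproof running shoes", pool, k=2))
```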
arXiv Detail & Related papers (2024-03-22T08:10:32Z)
- Social Commonsense-Guided Search Query Generation for Open-Domain Knowledge-Powered Conversations [66.16863141262506]
We present a novel approach that focuses on generating internet search queries guided by social commonsense.
Our proposed framework addresses passive user interactions by integrating topic tracking, commonsense response generation and instruction-driven query generation.
arXiv Detail & Related papers (2023-10-22T16:14:56Z)
- FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z)
- Towards Building Economic Models of Conversational Search [17.732575878508566]
We develop two economic models of conversational search based on patterns previously observed during search sessions.
Our models show that the amount of feedback given/requested depends on its efficiency at improving the initial or subsequent query.
arXiv Detail & Related papers (2022-01-21T15:20:51Z)
- Conversational Recommendation: Theoretical Model and Complexity Analysis [6.084774669743511]
We present a theoretical, domain-independent model of conversational recommendation.
We show that finding an efficient conversational strategy is NP-hard.
We also show that catalog characteristics can strongly influence the efficiency of individual conversational strategies.
arXiv Detail & Related papers (2021-11-10T09:05:52Z)
- Open-Retrieval Conversational Question Answering [62.11228261293487]
We introduce an open-retrieval conversational question answering (ORConvQA) setting, where we learn to retrieve evidence from a large collection before extracting answers.
We build an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers.
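The retriever/reranker/reader split described above lends itself to a small pipeline sketch; the interfaces and toy components below are placeholders for illustration, not the ORConvQA system itself.

```python
"""Sketch of a retrieve -> rerank -> read pipeline of the shape described
above. Interfaces and toy components are placeholders, not ORConvQA itself."""

from typing import Callable, List, Tuple

Retriever = Callable[[str, int], List[str]]        # (query, k) -> passages
Reranker = Callable[[str, List[str]], List[str]]   # (query, passages) -> reordered
Reader = Callable[[str, str], Tuple[str, float]]   # (query, passage) -> (answer, score)


def answer(query: str, retrieve: Retriever, rerank: Reranker, read: Reader,
           k: int = 100, top: int = 5) -> str:
    """Retrieve candidates, rerank them, read the top few, keep the best answer."""
    candidates = rerank(query, retrieve(query, k))[:top]
    return max((read(query, p) for p in candidates), key=lambda a: a[1])[0]


if __name__ == "__main__":
    # Toy stand-ins for the Transformer-based retriever, reranker, and reader.
    corpus = ["paris is the capital of france", "berlin is the capital of germany"]
    overlap = lambda q, p: len(set(q.split()) & set(p.split()))
    retrieve = lambda q, k: corpus[:k]
    rerank = lambda q, ps: sorted(ps, key=lambda p: overlap(q, p), reverse=True)
    read = lambda q, p: (p, float(overlap(q, p)))
    print(answer("what is the capital of france", retrieve, rerank, read))
```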
arXiv Detail & Related papers (2020-05-22T19:39:50Z)
- Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
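The frequency-based term-importance step lends itself to a small sketch. The version below scores candidate expansion terms by raw frequency over earlier turns with a hand-picked stopword list; the scoring, stopword list, cutoff, and function name are assumptions for illustration, not the paper's actual estimator.

```python
"""Illustrative frequency-based expansion of a conversational query with
salient terms from earlier turns. Scoring, stopword list, and cutoff are
assumptions, not the paper's term-importance estimator."""

from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "on", "to", "is", "it", "and",
             "what", "how", "when", "did", "does", "me", "about", "tell"}


def expand_query(current_turn: str, history: list[str], n_terms: int = 3) -> str:
    """Append the most frequent non-stopword history terms missing from the query."""
    current_terms = set(current_turn.lower().split())
    counts = Counter(
        term
        for turn in history
        for term in turn.lower().split()
        if term not in STOPWORDS and term not in current_terms
    )
    expansion = [term for term, _ in counts.most_common(n_terms)]
    return " ".join([current_turn] + expansion)


if __name__ == "__main__":
    history = ["tell me about the mars rover",
               "when did the rover land on mars"]
    print(expand_query("what instruments does it carry", history))
```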
arXiv Detail & Related papers (2020-05-05T14:30:20Z)
- Dialogue-Based Relation Extraction [53.2896545819799]
We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE.
We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks.
Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings.
arXiv Detail & Related papers (2020-04-17T03:51:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.