A Clarifying Question Selection System from NTES_ALONG in Convai3 Challenge
- URL: http://arxiv.org/abs/2010.14202v3
- Date: Fri, 20 Nov 2020 04:19:33 GMT
- Title: A Clarifying Question Selection System from NTES_ALONG in Convai3 Challenge
- Authors: Wenjie Ou, Yue Lin
- Abstract summary: This paper presents the participation of the NetEase Game AI Lab team in the ClariQ challenge at the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020.
The challenge asks for a complete conversational information retrieval system that can understand and generate clarification questions.
We propose a clarifying question selection system that consists of response understanding, candidate question recalling, and clarifying question ranking.
- Score: 8.656503175492375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the participation of the NetEase Game AI Lab team in the ClariQ challenge at the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020. The challenge asks for a complete conversational information retrieval system that can understand and generate clarification questions. We propose a clarifying question selection system that consists of response understanding, candidate question recalling, and clarifying question ranking. We fine-tune a RoBERTa model to understand users' responses and use an enhanced BM25 model to recall the candidate questions. In the clarifying question ranking stage, we reconstruct the training dataset and propose two models based on ELECTRA. Finally, we ensemble the models by summing their output probabilities and choosing the question with the highest probability as the clarification question. Experiments show that our ensemble ranking model performs best in the document relevance task and achieves the best recall@[20,30] metrics in the question relevance task. In the multi-turn conversation evaluation of stage 2, our system achieves the top score on all document relevance metrics.
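To make the final selection step concrete, here is a minimal sketch of the probability-sum ensemble the abstract describes. The `Ranker` type, the candidate list, and the scoring functions are hypothetical stand-ins for the paper's two ELECTRA-based models; the sketch assumes each ranker returns one probability per candidate, aligned by index.

```python
# Minimal sketch of the ensemble selection step: sum each ranker's output
# probability per candidate question and return the candidate with the
# highest total. The rankers are hypothetical stand-ins for the paper's
# two ELECTRA-based ranking models.
from typing import Callable, List

# (query, candidates) -> one probability per candidate, index-aligned
Ranker = Callable[[str, List[str]], List[float]]

def select_clarifying_question(
    query: str,
    candidates: List[str],   # e.g. questions recalled by the BM25 stage
    rankers: List[Ranker],
) -> str:
    totals = [0.0] * len(candidates)
    for ranker in rankers:
        probs = ranker(query, candidates)
        for i, p in enumerate(probs):
            totals[i] += p
    # Choose the question with the highest summed probability.
    best = max(range(len(candidates)), key=totals.__getitem__)
    return candidates[best]
```

With two rankers this reduces to an argmax over p1 + p2; score normalization and tie-breaking are left open here, as the abstract does not specify them.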
Related papers
- Multi-hop Evidence Pursuit Meets the Web: Team Papelo at FEVER 2024 [1.3923460621808879]
We show that the reasoning power of large language models (LLMs) and the retrieval power of modern search engines can be combined to automate fact verification.
We integrate LLMs and search under a multi-hop evidence pursuit strategy.
Our submitted system achieves a .510 AVeriTeC score on the dev set and a .477 AVeriTeC score on the test set.
arXiv Detail & Related papers (2024-11-08T18:25:06Z)
- CLARINET: Augmenting Language Models to Ask Clarification Questions for Retrieval [52.134133938779776]
We present CLARINET, a system that asks informative clarification questions by choosing questions whose answers would maximize certainty in the correct candidate.
Our approach works by augmenting a large language model (LLM) to condition on a retrieval distribution, finetuning end-to-end to generate the question that would have maximized the rank of the true candidate at each turn.
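One rough way to read CLARINET's selection criterion: score each question by the expected confidence in the top candidate after observing its answer. The sketch below is purely illustrative; the answer distributions, posteriors, and `answer_models` structure are assumptions, not CLARINET's actual implementation, which finetunes an LLM end-to-end on a retrieval distribution.

```python
# Illustrative certainty-maximizing question selection (assumed formulation):
# pick the question whose expected post-answer confidence in the best
# candidate is highest.
from typing import Dict, List, Tuple

def expected_certainty(
    answer_probs: Dict[str, float],           # P(answer | question)
    posteriors: Dict[str, Dict[str, float]],  # answer -> P(candidate | answer)
) -> float:
    # E_answer[ max_candidate P(candidate | answer) ]
    return sum(p * max(posteriors[a].values()) for a, p in answer_probs.items())

def pick_question(
    questions: List[str],
    answer_models: Dict[str, Tuple[Dict[str, float], Dict[str, Dict[str, float]]]],
) -> str:
    return max(questions, key=lambda q: expected_certainty(*answer_models[q]))
```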
arXiv Detail & Related papers (2024-04-28T18:21:31Z)
- PAQA: Toward ProActive Open-Retrieval Question Answering [34.883834970415734]
This work aims to tackle the challenge of generating relevant clarifying questions by taking into account the inherent ambiguities present in both user queries and documents.
We propose PAQA, an extension of the existing AmbigNQ dataset that incorporates clarifying questions.
We then evaluate various models and assess how passage retrieval impacts ambiguity detection and the generation of clarifying questions.
arXiv Detail & Related papers (2024-02-26T14:40:34Z)
- Towards Reliable and Factual Response Generation: Detecting Unanswerable Questions in Information-Seeking Conversations [16.99952884041096]
Generative AI models face the challenge of hallucinations that can undermine users' trust in such systems.
We approach the problem of conversational information seeking as a two-step process, where relevant passages in a corpus are identified first and then summarized into a final system response.
Specifically, our proposed method employs a sentence-level classifier to detect if the answer is present, then aggregates these predictions on the passage level, and eventually across the top-ranked passages to arrive at a final answerability estimate.
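As a sketch of the two-level aggregation just described: sentence-level answerability scores are pooled to a passage-level score, then pooled again across the top-ranked passages. Max-pooling at each level is an assumption; the paper may aggregate differently.

```python
# Assumed aggregation scheme: max-pool sentence scores within a passage,
# then max-pool over the top-k retrieved passages for a final estimate.
from typing import List

def passage_answerability(sentence_scores: List[float]) -> float:
    # A passage is treated as answerable if any sentence carries the answer.
    return max(sentence_scores)

def query_answerability(ranked_passages: List[List[float]], top_k: int = 3) -> float:
    # ranked_passages: sentence-score lists, ordered by retrieval rank.
    scores = [passage_answerability(s) for s in ranked_passages[:top_k]]
    return max(scores) if scores else 0.0
```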
arXiv Detail & Related papers (2024-01-21T10:15:36Z)
- Evaluating Mixed-initiative Conversational Search Systems via User Simulation [9.066817876491053]
We propose a conversational User Simulator, called USi, for automatic evaluation of such search systems.
We show that responses generated by USi are both in line with the underlying information need and comparable to human-generated answers.
arXiv Detail & Related papers (2022-04-17T16:27:33Z)
- Double Retrieval and Ranking for Accurate Question Answering [120.69820139008138]
We show that an answer verification step introduced in Transformer-based answer selection models can significantly improve the state of the art in Question Answering.
Results on three well-known AS2 datasets show consistent and significant improvements over the state of the art.
arXiv Detail & Related papers (2022-01-16T06:20:07Z)
- Answer Generation for Retrieval-based Question Answering Systems [80.28727681633096]
We train a sequence-to-sequence transformer model to generate an answer from a candidate set.
Our tests on three English AS2 datasets show improvements of up to 32 absolute accuracy points over the state of the art.
arXiv Detail & Related papers (2021-06-02T05:45:49Z)
- MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for Answer Selection [59.95429407899612]
We propose a novel reinforcement-learning-based multi-step ranking model, named MS-Ranker.
We explicitly consider the potential correctness of candidates and update the evidence with a gating mechanism.
Our model significantly outperforms existing methods that do not rely on external resources.
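The gating mechanism is not specified in this blurb; a GRU-style convex gate between accumulated evidence and the current candidate is one plausible reading, sketched below purely as an assumption.

```python
# Assumed GRU-style gated evidence update, not MS-Ranker's exact formulation:
# a sigmoid gate interpolates between old evidence and the new candidate.
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def update_evidence(
    evidence: np.ndarray,   # accumulated evidence vector, shape (d,)
    candidate: np.ndarray,  # encoding of the current candidate, shape (d,)
    w_gate: np.ndarray,     # hypothetical gate parameters, shape (d, 2d)
) -> np.ndarray:
    gate = sigmoid(w_gate @ np.concatenate([evidence, candidate]))
    return gate * evidence + (1.0 - gate) * candidate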
arXiv Detail & Related papers (2020-10-10T10:36:58Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
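For flavor, a minimal GPT-2 generation call with HuggingFace `transformers` is shown below. The prompt format is an assumption; INQUISITIVE's models are trained on article-context/question pairs rather than prompted off the shelf.

```python
# Minimal GPT-2 question-generation sketch (assumes `transformers` installed;
# prompt format is an illustrative assumption, not the paper's setup).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
context = "The company reported a sudden drop in quarterly revenue."
out = generator(f"{context} Question:", max_new_tokens=20, num_return_sequences=1)
print(out[0]["generated_text"])
```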
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- ConvAI3: Generating Clarifying Questions for Open-Domain Dialogue Systems (ClariQ) [64.60303062063663]
This document presents a detailed description of the challenge on clarifying questions for dialogue systems (ClariQ).
The challenge is organized as part of the Conversational AI challenge series (ConvAI3) at the Search-oriented Conversational AI (SCAI) EMNLP workshop in 2020.
arXiv Detail & Related papers (2020-09-23T19:48:02Z)