Multifaceted Improvements for Conversational Open-Domain Question
Answering
- URL: http://arxiv.org/abs/2204.00266v1
- Date: Fri, 1 Apr 2022 07:54:27 GMT
- Title: Multifaceted Improvements for Conversational Open-Domain Question
Answering
- Authors: Tingting Liang, Yixuan Jiang, Congying Xia, Ziqiang Zhao, Yuyu Yin,
Philip S. Yu
- Abstract summary: We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top placements, where they can be selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
- Score: 54.913313912927045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Open-domain question answering (OpenQA) is an important branch of textual QA
which discovers answers for the given questions based on a large number of
unstructured documents. Effectively mining correct answers from open-domain
sources remains a considerable challenge. Existing OpenQA systems may suffer
from issues of question complexity and ambiguity, as well as insufficient
background knowledge. Recently, conversational OpenQA has been proposed to
address these issues using the abundant contextual information in the
conversation.
Promising as it might be, several fundamental limitations remain, including
inaccurate question understanding, coarse ranking for passage selection, and
inconsistent usage of the golden passage between the training and inference
phases. To alleviate these limitations, in this paper, we propose a
framework with Multifaceted Improvements for Conversational open-domain
Question Answering (MICQA). Specifically, MICQA has three significant
advantages. First, the proposed KL-divergence based regularization leads to
better question understanding for retrieval and answer reading. Second, the
added post-ranker module pushes more relevant passages to the top placements,
where they can be selected for the reader under a two-aspect constraint. Third,
the well-designed curriculum learning strategy effectively narrows the gap
between the golden-passage settings of training and inference, and encourages
the reader to find the true answer without golden-passage assistance. Extensive
experiments conducted on the publicly available OR-QuAC dataset demonstrate the
superiority of MICQA over state-of-the-art models on the conversational OpenQA
task.
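
To make the first improvement concrete, here is a minimal sketch of a KL-divergence regularizer, under the assumption that two encodings of the same question (for example, one built from the full conversation history and one from a rewritten self-contained question) should induce similar distributions over the candidate passages. The view names, temperature, and weighting are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumption): KL regularization that pulls two views of the
# same conversational question toward the same passage distribution.
import torch
import torch.nn.functional as F

def kl_regularizer(scores_view_a: torch.Tensor,
                   scores_view_b: torch.Tensor,
                   temperature: float = 1.0) -> torch.Tensor:
    """Compute KL(p_b || p_a) between the passage distributions induced by
    two question encodings; both score tensors are [batch, num_passages]."""
    log_p_a = F.log_softmax(scores_view_a / temperature, dim=-1)
    p_b = F.softmax(scores_view_b / temperature, dim=-1)
    kl = (p_b * (p_b.clamp_min(1e-12).log() - log_p_a)).sum(dim=-1)
    return kl.mean()

# Hypothetical usage: add the regularizer to the usual retriever/reader loss.
# loss = task_loss + 0.1 * kl_regularizer(history_scores, rewrite_scores)
```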
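The abstract does not spell out the two-aspect constraint, so the sketch below is one plausible reading: a lightweight post-ranker rescores the retriever's top-k passages, trained to (1) rank the golden passage highest and (2) stay consistent with the retriever's distribution. The bilinear scorer, both loss terms, and the `alpha` weight are assumptions for illustration.

```python
# Minimal sketch (assumption): a post-ranker over the retriever's top-k passages.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PostRanker(nn.Module):
    """Rescores retrieved passages before the reader sees them."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Bilinear(dim, dim, 1)  # question-passage interaction

    def forward(self, q_emb: torch.Tensor, p_embs: torch.Tensor) -> torch.Tensor:
        # q_emb: [batch, dim], p_embs: [batch, k, dim] -> scores: [batch, k]
        q = q_emb.unsqueeze(1).expand_as(p_embs).contiguous()
        return self.score(q, p_embs.contiguous()).squeeze(-1)

def post_rank_loss(new_scores, retriever_scores, gold_idx, alpha=0.5):
    # Aspect 1 (assumed): push the golden passage to the top placement.
    rank_loss = F.cross_entropy(new_scores, gold_idx)
    # Aspect 2 (assumed): do not drift too far from the retriever's ranking.
    consistency = F.kl_div(F.log_softmax(new_scores, dim=-1),
                           F.softmax(retriever_scores, dim=-1),
                           reduction="batchmean")
    return rank_loss + alpha * consistency

# Hypothetical usage: keep the re-ranked top-m passages for the reader.
# top_m_idx = post_ranker(q_emb, p_embs).topk(5, dim=-1).indices
```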
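For the third improvement, the curriculum can be read as annealing the reader's dependence on the golden passage: early in training the golden passage is always injected into the reader's input, and the injection probability decays to zero so that training ends under the same golden-free condition as inference. The linear schedule and slot-replacement policy below are assumptions, not the paper's exact recipe.

```python
# Minimal sketch (assumption): linearly anneal golden-passage injection.
import random

def build_reader_input(ranked_passages, golden_passage, step, total_steps,
                       num_slots=5):
    """Return the passages fed to the reader at a given training step."""
    p_inject = max(0.0, 1.0 - step / total_steps)  # 1.0 -> 0.0 over training
    passages = list(ranked_passages[:num_slots])
    if golden_passage not in passages and random.random() < p_inject:
        passages[-1] = golden_passage  # swap in for the weakest candidate
    return passages
```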
Related papers
- QPaug: Question and Passage Augmentation for Open-Domain Question Answering of LLMs [5.09189220106765]
We propose a simple yet efficient method called question and passage augmentation (QPaug) via large language models (LLMs) for open-domain question-answering tasks.
Experimental results show that QPaug outperforms the previous state-of-the-art and achieves significant performance gain over existing RAG methods.
arXiv Detail & Related papers (2024-06-20T12:59:27Z)
- MFORT-QA: Multi-hop Few-shot Open Rich Table Question Answering [3.1651118728570635]
In today's fast-paced industry, professionals face the challenge of summarizing a large number of documents and extracting vital information from them on a daily basis.
To address this challenge, the approach of Table Question Answering (QA) has been developed to extract the relevant information.
Recent advancements in Large Language Models (LLMs) have opened up new possibilities for extracting information from tabular data using prompts.
arXiv Detail & Related papers (2024-03-28T03:14:18Z)
- Merging Generated and Retrieved Knowledge for Open-Domain QA [72.42262579925911]
COMBO is a Compatibility-Oriented knowledge Merging framework for Better Open-domain QA.
We show that COMBO outperforms competitive baselines on three out of four tested open-domain QA benchmarks.
arXiv Detail & Related papers (2023-10-22T19:37:06Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: an FA-model, which simultaneously selects key phrases and generates full answers, and a Q-model, which takes the generated full answer as additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- QAConv: Question Answering on Informative Conversations [85.2923607672282]
We focus on informative conversations including business emails, panel discussions, and work channels.
In total, we collect 34,204 QA pairs, including span-based, free-form, and unanswerable questions.
arXiv Detail & Related papers (2021-05-14T15:53:05Z)
- Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z)
- Answering Any-hop Open-domain Questions with Iterative Document Reranking [62.76025579681472]
We propose a unified QA framework to answer any-hop open-domain questions.
Our method consistently achieves performance comparable to or better than the state-of-the-art on both single-hop and multi-hop open-domain QA datasets.
arXiv Detail & Related papers (2020-09-16T04:31:38Z)
- DoQA -- Accessing Domain-Specific FAQs via Conversational QA [25.37327993590628]
We present DoQA, a dataset with 2,437 dialogues and 10,917 QA pairs.
The dialogues are collected from three Stack Exchange sites using the Wizard of Oz method with crowdsourcing.
arXiv Detail & Related papers (2020-05-04T08:58:54Z)
- Conversational Question Answering over Passages by Leveraging Word Proximity Networks [33.59664244897881]
CROWN is an unsupervised yet effective system for conversational QA with passage responses.
It supports several modes of context propagation over multiple turns.
CROWN was evaluated on TREC CAsT data, where it achieved above-median performance in a pool of neural methods.
arXiv Detail & Related papers (2020-04-27T19:30:47Z)