Conversational Question Answering: A Survey
- URL: http://arxiv.org/abs/2106.00874v2
- Date: Thu, 3 Jun 2021 01:02:38 GMT
- Title: Conversational Question Answering: A Survey
- Authors: Munazza Zaib and Wei Emma Zhang and Quan Z. Sheng and Adnan Mahmood
and Yang Zhang
- Abstract summary: This survey presents a comprehensive review of state-of-the-art research trends in Conversational Question Answering (CQA).
Our findings show that there has been a shift from single-turn to multi-turn QA, which empowers the field of conversational AI from multiple perspectives.
- Score: 18.447856993867788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question answering (QA) systems provide a way of querying, in natural language, information available in various formats including, but not limited to, unstructured and structured data. QA constitutes a considerable part of conversational artificial intelligence (AI), which has led to the emergence of Conversational Question Answering (CQA) as a dedicated research topic, wherein a system is required to understand the given context and then engage in multi-turn QA to satisfy the user's information needs. While most existing research has focused on single-turn QA, multi-turn QA has recently gained attention and prominence owing to the availability of large-scale, multi-turn QA datasets and the development of pre-trained language models. With a growing number of models and research papers added to the literature every year, there is a pressing need to organize and present the related work in a unified manner to streamline future research. This survey, therefore, is an effort to present a comprehensive review of the state-of-the-art research trends in CQA, based primarily on papers reviewed from 2016 to 2021. Our findings show a shift from single-turn to multi-turn QA, which empowers the field of conversational AI from multiple perspectives. This survey is intended to provide an overview for the research community, with the hope of laying a strong foundation for the field of CQA.
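To make the CQA setting described in the abstract concrete, here is a minimal sketch (not taken from the survey) of how a single-turn extractive reader built on a pre-trained language model can be applied to multi-turn QA by prepending the dialogue history to the current question. The model name, history format, and example passage are illustrative assumptions.

```python
# Minimal multi-turn CQA sketch: fold prior (question, answer) turns into the
# current question so a single-turn reader can resolve references like "he".
# The model name and history format are illustrative assumptions, not the
# survey's prescribed method.
from transformers import pipeline

reader = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

def answer_turn(context, history, question):
    # Serialize the conversation history as plain text in front of the question.
    history_text = " ".join(f"Q: {q} A: {a}" for q, a in history)
    query = f"{history_text} Q: {question}".strip()
    return reader(question=query, context=context)["answer"]

context = ("Albert Einstein was born in Ulm in 1879. "
           "He developed the theory of relativity.")
history = []
for q in ["Where was Einstein born?", "And in what year?"]:
    a = answer_turn(context, history, q)
    history.append((q, a))
    print(q, "->", a)
```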
Related papers
- PeerQA: A Scientific Question Answering Dataset from Peer Reviews [51.95579001315713]
We present PeerQA, a real-world, scientific, document-level Question Answering dataset.
The dataset contains 579 QA pairs from 208 academic articles, with a majority from ML and NLP.
We provide a detailed analysis of the collected dataset and conduct experiments establishing baseline systems for all three tasks.
arXiv Detail & Related papers (2025-02-19T12:24:46Z)
- Visual question answering: from early developments to recent advances -- a survey [11.729464930866483]
Visual Question Answering (VQA) is an evolving research field aimed at enabling machines to answer questions about visual content.
VQA has gained significant attention due to its broad applications, including interactive educational tools, medical image diagnosis, customer service, entertainment, and social media captioning.
arXiv Detail & Related papers (2025-01-07T17:00:35Z)
- Towards Robust Evaluation: A Comprehensive Taxonomy of Datasets and Metrics for Open Domain Question Answering in the Era of Large Language Models [0.0]
Open Domain Question Answering (ODQA) within natural language processing involves building systems that answer factual questions using large-scale knowledge corpora.
High-quality datasets are used to train models on realistic scenarios.
Standardized metrics facilitate comparisons between different ODQA systems; a minimal sketch of two common ones follows this entry.
arXiv Detail & Related papers (2024-06-19T05:43:02Z)
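As a concrete illustration of the standardized metrics mentioned above, the sketch below implements Exact Match (EM) and token-level F1, the two scores most commonly reported for extractive ODQA. The normalization rules follow the widely used SQuAD-style convention and are an assumption, not taken from the paper above.

```python
# Sketch of Exact Match (EM) and token-level F1 as commonly used in ODQA
# evaluation. Normalization (lowercasing, stripping articles and punctuation)
# follows the common SQuAD-style convention; this is illustrative only.
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))   # 1.0
print(round(token_f1("in Paris, France", "Paris"), 2))   # 0.5
```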
- Around the GLOBE: Numerical Aggregation Question-Answering on Heterogeneous Genealogical Knowledge Graphs with Deep Neural Networks [0.934612743192798]
We present a new end-to-end methodology for numerical aggregation QA for genealogical trees.
The proposed architecture, GLOBE, outperforms the state-of-the-art models and pipelines by achieving 87% accuracy for this task.
This study may have practical implications for genealogical information centers and museums.
arXiv Detail & Related papers (2023-07-30T12:09:00Z)
- Modern Question Answering Datasets and Benchmarks: A Survey [5.026863544662493]
Question Answering (QA) is one of the most important natural language processing (NLP) tasks.
It aims to use NLP technologies to generate an answer to a given question based on a massive unstructured corpus.
In this paper, we investigate influential QA datasets that have been released in the era of deep learning.
arXiv Detail & Related papers (2022-06-30T05:53:56Z)
- ProQA: Structural Prompt-based Pre-training for Unified Question Answering [84.59636806421204]
ProQA is a unified QA paradigm that solves various tasks through a single model.
It concurrently models the knowledge generalization for all QA tasks while keeping the knowledge customization for every specific QA task.
ProQA consistently boosts performance across full-data fine-tuning, few-shot learning, and zero-shot testing scenarios; a toy sketch of the unified prompt idea follows this entry.
arXiv Detail & Related papers (2022-05-09T04:59:26Z)
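The following toy sketch illustrates the general idea of casting heterogeneous QA tasks into a single text-to-text format with task-identifying structural fields. The field names and layout are invented for illustration and are not ProQA's actual structural prompt.

```python
# Toy illustration of serializing different QA task types into one unified
# text-to-text format, in the spirit of prompt-based unified QA. The field
# names ("[TASK]", "[QUESTION]", ...) are invented for this example and do
# not reproduce ProQA's actual structural prompt.
def build_prompt(task: str, question: str, context: str = "", options=None) -> str:
    parts = [f"[TASK] {task}", f"[QUESTION] {question}"]
    if context:
        parts.append(f"[CONTEXT] {context}")
    if options:
        parts.append("[OPTIONS] " + " | ".join(options))
    return " ".join(parts)

examples = [
    (build_prompt("extractive", "Where was Einstein born?",
                  context="Albert Einstein was born in Ulm."), "Ulm"),
    (build_prompt("multiple_choice", "Which planet is largest?",
                  options=["Mars", "Jupiter", "Venus"]), "Jupiter"),
]

for prompt, target in examples:
    # A single seq2seq model would be trained on (prompt, target) pairs for
    # every task, sharing knowledge across tasks while the [TASK] field keeps
    # task-specific behaviour separable.
    print(prompt, "->", target)
```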
- NoiseQA: Challenge Set Evaluation for User-Centric Question Answering [68.67783808426292]
We show that components in the pipeline that precede an answering engine can introduce varied and considerable sources of error.
We conclude that there is substantial room for progress before QA systems can be effectively deployed.
arXiv Detail & Related papers (2021-02-16T18:35:29Z)
- Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering [62.88322725956294]
We review the latest research trends in OpenQA, with particular attention to systems that incorporate neural MRC techniques.
We introduce the modern OpenQA architecture named "Retriever-Reader" and analyze the various systems that follow this architecture; a toy sketch of the pattern follows this entry.
We then discuss key challenges to developing OpenQA systems and offer an analysis of benchmarks that are commonly used.
arXiv Detail & Related papers (2021-01-04T04:47:46Z)
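A toy sketch of the two-stage Retriever-Reader pattern described above: a deliberately simple lexical retriever selects the most relevant passage, and a neural MRC reader extracts an answer span from it. The overlap-based retriever, corpus, and model choice are illustrative assumptions, not the survey's recommended setup.

```python
# Toy Retriever-Reader sketch: (1) retrieve the passage with the highest
# word-overlap score for the question, (2) run a neural MRC reader over it.
# The naive retriever and model choice are illustrative assumptions.
from transformers import pipeline

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "The Great Wall of China is visible across northern China.",
    "Mount Everest, on the border of Nepal and China, is Earth's highest peak.",
]

def retrieve(question, passages):
    # Naive lexical retriever: score each passage by question-word overlap.
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

reader = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

question = "When was the Eiffel Tower completed?"
passage = retrieve(question, corpus)
print(reader(question=question, context=passage)["answer"])  # expected: 1889
```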
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference; see the toy multi-hop sketch after this entry.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
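To make the notion of a complex question requiring multiple KB triples concrete, here is a toy sketch that answers a two-hop question over a tiny triple store. The triples, question, and hard-coded relation path are invented for illustration and are not from the surveyed systems.

```python
# Toy two-hop KBQA sketch: a complex question that cannot be answered from a
# single triple is resolved by chaining two triples. Everything here (the
# triples and the hard-coded relation path) is invented for illustration.
from typing import Optional

triples = {
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
}

def query(subject: str, relation: str) -> Optional[str]:
    # Look up a single triple (one "hop") in the knowledge base.
    for s, r, o in triples:
        if s == subject and r == relation:
            return o
    return None

# "Where was the director of Inception born?" needs two hops:
# Inception --directed_by--> Christopher Nolan --born_in--> London
director = query("Inception", "directed_by")
birthplace = query(director, "born_in")
print(birthplace)  # London
```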
- Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance; a toy sketch of the general idea follows this entry.
arXiv Detail & Related papers (2020-04-24T17:57:45Z)
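The toy sketch below illustrates the general idea of template-based question generation from a retrieved sentence: an answer candidate (here, a year) is located in the sentence and the remaining text is wrapped in a fixed wh-template to produce a pseudo (context, question, answer) training example. The regex answer extraction and the single template are invented for illustration and are not the paper's actual templates or retrieval step.

```python
# Toy template-based question generation: pick an answer candidate from a
# (retrieved) sentence and wrap the remaining text in a fixed template to get
# a pseudo (context, question, answer) training example. The regex and the
# single "When ...?" template are invented for illustration.
import re

def generate_qa(sentence: str):
    # Use a 4-digit year as the answer candidate, if one is present.
    match = re.search(r"\b(1[0-9]{3}|20[0-9]{2})\b", sentence)
    if match is None:
        return None
    answer = match.group(0)
    # Template: remove the answer and wrap the remainder in a wh-question.
    stem = sentence.replace(answer, "").rstrip(" .").strip()
    question = f"When {stem}?"
    return {"context": sentence, "question": question, "answer": answer}

retrieved = "The Eiffel Tower was completed in 1889."
print(generate_qa(retrieved))
# {'context': 'The Eiffel Tower was completed in 1889.',
#  'question': 'When The Eiffel Tower was completed in?', 'answer': '1889'}
```

Such pseudo-questions are often ungrammatical, but the cited work reports that training on template-generated pairs of this kind still improves downstream QA performance.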
This list is automatically generated from the titles and abstracts of the papers in this site.