Technical Question Answering across Tasks and Domains
- URL: http://arxiv.org/abs/2010.09780v2
- Date: Tue, 18 May 2021 05:00:18 GMT
- Title: Technical Question Answering across Tasks and Domains
- Authors: Wenhao Yu, Lingfei Wu, Yu Deng, Qingkai Zeng, Ruchi Mahindru, Sinem
Guven, Meng Jiang
- Abstract summary: We present an adjustable joint learning approach for document retrieval and reading comprehension tasks.
Our experiments on the TechQA dataset demonstrate superior performance compared with state-of-the-art methods.
- Score: 47.80330046038137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building an automatic technical support system is an important yet
challenging task. Conceptually, to answer a user question on a technical forum, a
human expert has to first retrieve relevant documents, and then read them carefully
to identify the answer snippet. Despite the huge success researchers have
achieved in general-domain question answering (QA), much less attention has been
paid to technical QA. Specifically, existing methods suffer from several unique
challenges: (i) the question and answer rarely overlap substantially, and (ii) the
data size is very limited. In this paper, we propose a novel deep transfer
learning framework to effectively address technical QA across tasks and domains.
To this end, we present an adjustable joint learning approach for document
retrieval and reading comprehension tasks. Our experiments on the TechQA dataset
demonstrate superior performance compared with state-of-the-art methods.
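The adjustable joint learning idea in the abstract can be sketched as a weighted combination of the two task losses. The function below is a minimal illustration under assumed names; the paper's actual weighting scheme and training procedure are not specified here.

```python
# Minimal sketch of an adjustable joint loss for document retrieval and
# reading comprehension. The weighting scheme and the name `joint_loss`
# are illustrative assumptions, not the paper's exact formulation.

def joint_loss(retrieval_loss: float, reading_loss: float, alpha: float = 0.5) -> float:
    """Combine the two task losses with an adjustable weight alpha in [0, 1].

    alpha = 1.0 trains retrieval only; alpha = 0.0 trains reading only;
    intermediate values trade off the two tasks during joint training.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * retrieval_loss + (1.0 - alpha) * reading_loss
```

Making alpha a tunable hyperparameter lets the balance between the two tasks shift, which is one plausible reading of "adjustable" joint learning.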
Related papers
- Exploring the State of the Art in Legal QA Systems [20.178251855026684]
Question answering (QA) systems are designed to generate answers to questions asked in human languages.
QA has various practical applications, including customer service, education, research, and cross-lingual communication.
We provide a comprehensive survey that reviews 14 benchmark datasets for question-answering in the legal field.
arXiv Detail & Related papers (2023-04-13T15:48:01Z)
- Modern Question Answering Datasets and Benchmarks: A Survey [5.026863544662493]
Question Answering (QA) is one of the most important natural language processing (NLP) tasks.
It aims to use NLP technologies to generate a corresponding answer to a given question based on a massive unstructured corpus.
In this paper, we investigate influential QA datasets that have been released in the era of deep learning.
arXiv Detail & Related papers (2022-06-30T05:53:56Z)
- Asking the Right Questions in Low Resource Template Extraction [37.77304148934836]
We ask whether end users of TE systems can design these questions, and whether it is beneficial to involve an NLP practitioner in the process.
We propose a novel model to perform TE with prompts, and find it benefits from questions over other styles of prompts.
arXiv Detail & Related papers (2022-05-25T10:39:09Z)
- Multifaceted Improvements for Conversational Open-Domain Question Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module can push more relevant passages to the top placements to be selected by the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden passage settings of training and inference, and encourages the reader to find the true answer without golden passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z)
- Towards Collaborative Question Answering: A Preliminary Study [63.91687114660126]
We propose CollabQA, a novel QA task in which several expert agents coordinated by a moderator work together to answer questions that cannot be answered with any single agent alone.
We construct a synthetic dataset from a large knowledge graph that can be distributed among experts.
We show that the problem can be challenging without introducing a prior on the collaboration structure, unless experts are perfect and uniform.
arXiv Detail & Related papers (2022-01-24T14:27:00Z)
- Achieving Human Parity on Visual Question Answering [67.22500027651509]
The Visual Question Answering (VQA) task utilizes both visual image and language analysis to answer a textual question with respect to an image.
This paper describes our recent research on AliceMind-MMU, which obtains similar or even slightly better results than human beings do on VQA.
This is achieved by systematically improving the VQA pipeline, including: (1) pre-training with comprehensive visual and textual feature representation; (2) effective cross-modal interaction with learning to attend; and (3) a novel knowledge mining framework with specialized expert modules for the complex VQA task.
arXiv Detail & Related papers (2021-11-17T04:25:11Z)
- Complex Knowledge Base Question Answering: A Survey [41.680033017518376]
Knowledge base question answering (KBQA) aims to answer a question over a knowledge base (KB).
In recent years, researchers have proposed a large number of novel methods that address the challenges of answering complex questions.
We present two mainstream categories of methods for complex KBQA, namely semantic parsing-based (SP-based) methods and information retrieval-based (IR-based) methods.
arXiv Detail & Related papers (2021-08-15T08:14:54Z)
- Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study [45.9120218818558]
We benchmark the research on the NarrativeQA dataset with experiments with cutting-edge ODQA techniques.
This quantifies the challenges Book QA poses, as well as advances the published state-of-the-art with a ~7% absolute improvement on Rouge-L.
Our findings indicate that the event-centric questions dominate this task, which exemplifies the inability of existing QA models to handle event-oriented scenarios.
arXiv Detail & Related papers (2021-06-07T17:46:09Z)
- Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering [62.88322725956294]
We review the latest research trends in OpenQA, with particular attention to systems that incorporate neural MRC techniques.
We introduce the modern OpenQA architecture named "Retriever-Reader" and analyze the various systems that follow this architecture.
We then discuss key challenges to developing OpenQA systems and offer an analysis of benchmarks that are commonly used.
arXiv Detail & Related papers (2021-01-04T04:47:46Z)
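The "Retriever-Reader" architecture surveyed in the entry above can be sketched as a two-stage pipeline. The toy version below uses simple lexical term overlap for both stages; real systems use dense retrievers and neural machine reading comprehension models, and all function names here are illustrative assumptions.

```python
# Toy sketch of the Retriever-Reader OpenQA pipeline: a lexical retriever
# ranks documents by term overlap with the question, then a toy "reader"
# extracts the highest-overlap sentence as the answer. Production systems
# replace both stages with learned neural components.

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by the number of question terms they contain."""
    q_terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def read(question: str, doc: str) -> str:
    """Return the sentence of `doc` sharing the most terms with the question."""
    q_terms = set(question.lower().split())
    sentences = [s.strip() for s in doc.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q_terms & set(s.lower().split())))

def answer(question: str, docs: list[str]) -> str:
    """Full pipeline: retrieve the top document, then read an answer from it."""
    top = retrieve(question, docs, k=1)[0]
    return read(question, top)
```

Separating retrieval from reading is the key design choice: the retriever narrows a large corpus to a few candidates cheaply, so the (expensive) reader only inspects a handful of documents.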
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.