Domain Adaptation for Question Answering via Question Classification
- URL: http://arxiv.org/abs/2209.04998v1
- Date: Mon, 12 Sep 2022 03:12:02 GMT
- Title: Domain Adaptation for Question Answering via Question Classification
- Authors: Zhenrui Yue, Huimin Zeng, Ziyi Kou, Lanyu Shang, Dong Wang
- Abstract summary: We propose a novel framework: Question Classification for Question Answering (QC4QA).
For optimization, the inter-domain discrepancy between the source and target domains is reduced via the maximum mean discrepancy (MMD) distance.
We demonstrate the effectiveness of the proposed QC4QA with consistent improvements against the state-of-the-art baselines on multiple datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Question answering (QA) has demonstrated impressive progress in answering
questions from customized domains. Nevertheless, domain adaptation remains one
of the most elusive challenges for QA systems, especially when QA systems are
trained in a source domain but deployed in a different target domain. In this
work, we investigate the potential benefits of question classification for QA
domain adaptation. We propose a novel framework: Question Classification for
Question Answering (QC4QA). Specifically, a question classifier is adopted to
assign question classes to both the source and target data. Then, we perform
joint training in a self-supervised fashion via pseudo-labeling. For
optimization, the inter-domain discrepancy between the source and target domains is
reduced via the maximum mean discrepancy (MMD) distance. We additionally minimize
intra-class discrepancy among QA samples of the same question class for
fine-grained adaptation performance. To the best of our knowledge, this is the
first work in QA domain adaptation to leverage question classification with
self-supervised adaptation. We demonstrate the effectiveness of the proposed
QC4QA with consistent improvements against the state-of-the-art baselines on
multiple datasets.
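For intuition, here is a minimal sketch of how the two discrepancy terms described in the abstract could be combined with the QA loss. The linear-kernel MMD, the centroid-based intra-class term, and the loss weights are illustrative assumptions, not the authors' implementation; the target-side question classes would come from the question classifier and the pseudo-labeling step mentioned above.
```python
# Minimal sketch (PyTorch) of the adaptation terms described in the abstract.
# The linear-kernel MMD, the centroid-based intra-class term, and the weights
# are assumptions for illustration, not the authors' implementation.
import torch


def mmd_linear(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """Maximum mean discrepancy with a linear kernel between two feature batches."""
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return delta.dot(delta)


def intra_class_discrepancy(feats: torch.Tensor, classes: torch.Tensor) -> torch.Tensor:
    """Mean squared distance of each sample to the centroid of its question class."""
    loss = feats.new_zeros(())
    unique_classes = classes.unique()
    for c in unique_classes:
        members = feats[classes == c]
        if members.size(0) > 1:
            centroid = members.mean(dim=0, keepdim=True)
            loss = loss + ((members - centroid) ** 2).sum(dim=1).mean()
    return loss / unique_classes.numel()


def adaptation_loss(qa_loss, src_feats, tgt_feats, src_classes, tgt_classes,
                    lambda_mmd: float = 0.1, lambda_intra: float = 0.1):
    """Supervised QA loss plus inter-domain (MMD) and intra-class penalties."""
    all_feats = torch.cat([src_feats, tgt_feats], dim=0)
    all_classes = torch.cat([src_classes, tgt_classes], dim=0)
    return (qa_loss
            + lambda_mmd * mmd_linear(src_feats, tgt_feats)
            + lambda_intra * intra_class_discrepancy(all_feats, all_classes))
```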
Related papers
- SQUARE: Automatic Question Answering Evaluation using Multiple Positive
and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation)
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
arXiv Detail & Related papers (2023-09-21T16:51:30Z) - DomainInv: Domain Invariant Fine Tuning and Adversarial Label Correction
For QA Domain Adaptation [27.661609140918916]
Existing Question Answering (QA) systems are limited in their ability to answer questions from unseen domains or other out-of-domain distributions.
Most importantly, all existing QA domain adaptation methods are based either on generating synthetic data or on pseudo-labeling the target-domain data.
In this paper, we propose unsupervised domain adaptation for an unlabeled target domain by moving the target representations closer to the source domain while still using supervision from the source domain.
arXiv Detail & Related papers (2023-05-04T18:13:17Z) - QA Domain Adaptation using Hidden Space Augmentation and Self-Supervised
Contrastive Adaptation [24.39026345750824]
Question answering (QA) has recently shown impressive results for answering questions from customized domains.
Yet, a common challenge is to adapt QA models to an unseen target domain.
We propose a novel self-supervised framework called QADA for QA domain adaptation.
arXiv Detail & Related papers (2022-10-19T19:52:57Z) - Prior Knowledge Guided Unsupervised Domain Adaptation [82.9977759320565]
We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available.
In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship.
We propose a rectification module that uses such prior knowledge to refine model-generated pseudo labels.
arXiv Detail & Related papers (2022-07-18T18:41:36Z) - Multifaceted Improvements for Conversational Open-Domain Question
Answering [54.913313912927045]
We propose a framework with Multifaceted Improvements for Conversational open-domain Question Answering (MICQA).
First, the proposed KL-divergence-based regularization leads to better question understanding for retrieval and answer reading.
Second, the added post-ranker module pushes more relevant passages to the top positions so that they can be selected for the reader under a two-aspect constraint.
Third, the well-designed curriculum learning strategy effectively narrows the gap between the golden-passage settings of training and inference, and encourages the reader to find the true answer without golden-passage assistance.
arXiv Detail & Related papers (2022-04-01T07:54:27Z) - Synthetic Question Value Estimation for Domain Adaptation of Question
Answering [31.003053719921628]
We introduce a novel idea of training a question value estimator (QVE) that directly estimates the usefulness of synthetic questions for improving the target-domain QA performance.
By using such questions and only around 15% of the human annotations on the target domain, we can achieve comparable performance to the fully-supervised baselines.
arXiv Detail & Related papers (2022-03-16T20:22:31Z) - Relation-Guided Pre-Training for Open-Domain Question Answering [67.86958978322188]
We propose a Relation-Guided Pre-Training (RGPT-QA) framework to solve complex open-domain questions.
We show that RGPT-QA achieves 2.2%, 2.4%, and 6.3% absolute improvements in Exact Match accuracy on Natural Questions, TriviaQA, and WebQuestions, respectively.
arXiv Detail & Related papers (2021-09-21T17:59:31Z) - Contrastive Domain Adaptation for Question Answering using Limited Text
Corpora [20.116147632481983]
We propose a novel framework for domain adaptation called contrastive domain adaptation for QA (CAQA).
Specifically, CAQA combines techniques from question generation and domain-invariant learning to answer out-of-domain questions in settings with limited text corpora.
arXiv Detail & Related papers (2021-08-31T14:05:55Z) - Knowledge Graph Simple Question Answering for Unseen Domains [9.263766921991452]
We propose a data-centric domain adaptation framework that is applicable to new domains.
We use distant supervision to extract a set of keywords that express each relation of the unseen domain.
Our framework significantly improves over zero-shot baselines and is robust across domains.
arXiv Detail & Related papers (2020-05-25T11:34:54Z) - A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$3$US surpasses state-of-the-art methods on partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z) - Unsupervised Question Decomposition for Question Answering [102.56966847404287]
We propose an algorithm for One-to-N Unsupervised Sequence transduction (ONUS) that learns to map one hard, multi-hop question to many simpler, single-hop sub-questions.
We show large QA improvements on HotpotQA over a strong baseline on the original, out-of-domain, and multi-hop dev sets.
arXiv Detail & Related papers (2020-02-22T19:40:35Z)
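As a rough illustration of the decompose-then-answer idea behind ONUS in the entry above, the sketch below splits a multi-hop question into sub-questions, answers each, and reuses the sub-answers as extra context. The names `decompose` and `single_hop_qa` are hypothetical stand-ins for the learned decomposer and an off-the-shelf single-hop QA model, and the recombination step is an assumption for illustration.
```python
# Hedged sketch of a decompose-then-answer pipeline for multi-hop QA.
# `decompose` and `single_hop_qa` are hypothetical stand-ins, not the
# authors' models; the recombination step is an illustrative assumption.
from typing import Callable, List


def answer_multi_hop(question: str,
                     context: str,
                     decompose: Callable[[str], List[str]],
                     single_hop_qa: Callable[[str, str], str]) -> str:
    # Map the hard, multi-hop question to simpler single-hop sub-questions.
    sub_questions = decompose(question)
    # Answer each sub-question against the original context.
    sub_answers = [single_hop_qa(sq, context) for sq in sub_questions]
    # Append the sub-question/answer pairs to the context, then answer
    # the original question with the augmented context.
    augmented = context + " " + " ".join(
        f"{sq} {sa}" for sq, sa in zip(sub_questions, sub_answers)
    )
    return single_hop_qa(question, augmented)
```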