Synthetic Question Value Estimation for Domain Adaptation of Question
Answering
- URL: http://arxiv.org/abs/2203.08926v1
- Date: Wed, 16 Mar 2022 20:22:31 GMT
- Title: Synthetic Question Value Estimation for Domain Adaptation of Question
Answering
- Authors: Xiang Yue and Ziyu Yao and Huan Sun
- Abstract summary: We introduce a novel idea of training a question value estimator (QVE) that directly estimates the usefulness of synthetic questions for improving the target-domain QA performance.
By using such questions and only around 15% of the human annotations on the target domain, we can achieve comparable performance to the fully-supervised baselines.
- Score: 31.003053719921628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthesizing QA pairs with a question generator (QG) on the target domain has
become a popular approach for domain adaptation of question answering (QA)
models. Since synthetic questions are often noisy in practice, existing work
adapts scores from a pretrained QA (or QG) model as criteria to select
high-quality questions. However, these scores do not directly serve the
ultimate goal of improving QA performance on the target domain. In this paper,
we introduce a novel idea of training a question value estimator (QVE) that
directly estimates the usefulness of synthetic questions for improving the
target-domain QA performance. By conducting comprehensive experiments, we show
that the synthetic questions selected by QVE can help achieve better
target-domain QA performance, in comparison with existing techniques. We
additionally show that by using such questions and only around 15% of the human
annotations on the target domain, we can achieve comparable performance to the
fully-supervised baselines.
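The selection idea in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: the paper trains a neural question value estimator with feedback from target-domain QA performance, whereas the scorer below (`qve_score`, `select_top_k` are illustrative names) uses crude heuristics purely to show the filter-then-finetune loop.

```python
# Minimal sketch of QVE-style filtering of noisy synthetic QA pairs.
# A trained QVE would replace the heuristic scorer below.

def qve_score(question: str, answer: str, context: str) -> float:
    """Toy stand-in for a trained question value estimator.

    A real QVE is a neural model trained to predict how much a
    synthetic pair improves target-domain QA. Here we only reward
    answers that actually appear in the context and questions of
    non-trivial length.
    """
    score = 0.0
    if answer in context:
        score += 1.0
    score += min(len(question.split()), 10) / 10.0
    return score

def select_top_k(synthetic_pairs, context, k):
    """Keep the k synthetic (question, answer) pairs valued most."""
    ranked = sorted(
        synthetic_pairs,
        key=lambda qa: qve_score(qa[0], qa[1], context),
        reverse=True,
    )
    return ranked[:k]

context = "The QVE model is trained to estimate question value."
pairs = [
    ("What is the QVE model trained to estimate?", "question value"),
    ("Huh?", "banana"),  # the kind of noisy pair a QG model can emit
]
kept = select_top_k(pairs, context, k=1)
```

The selected pairs would then be used to fine-tune the target-domain QA model, optionally together with the small budget of human annotations mentioned above.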
Related papers
- KaPQA: Knowledge-Augmented Product Question-Answering [59.096607961704656]
We introduce two product question-answering (QA) datasets focused on Adobe Acrobat and Photoshop products.
We also propose a novel knowledge-driven RAG-QA framework to enhance the performance of the models in the product QA task.
arXiv Detail & Related papers (2024-07-22T22:14:56Z)
- QADYNAMICS: Training Dynamics-Driven Synthetic QA Diagnostic for Zero-Shot Commonsense Question Answering [48.25449258017601]
State-of-the-art approaches fine-tune language models on QA pairs constructed from CommonSense Knowledge Bases.
We propose QADYNAMICS, a training dynamics-driven framework for QA diagnostics and refinement.
arXiv Detail & Related papers (2023-10-17T14:27:34Z)
- SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation)
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
arXiv Detail & Related papers (2023-09-21T16:51:30Z)
- Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions [70.70725223310401]
This work proposes a new method that utilizes semantically related questions, referred to as basic questions, acting as noise to evaluate the robustness of VQA models.
The experimental results demonstrate that the proposed evaluation method effectively analyzes the robustness of VQA models.
arXiv Detail & Related papers (2023-04-06T15:32:35Z)
- QA Domain Adaptation using Hidden Space Augmentation and Self-Supervised Contrastive Adaptation [24.39026345750824]
Question answering (QA) has recently shown impressive results for answering questions from customized domains.
Yet, a common challenge is to adapt QA models to an unseen target domain.
We propose a novel self-supervised framework called QADA for QA domain adaptation.
arXiv Detail & Related papers (2022-10-19T19:52:57Z)
- Domain Adaptation for Question Answering via Question Classification [8.828396559882954]
We propose a novel framework: Question Classification for Question Answering (QC4QA)
For optimization, inter-domain discrepancy between the source and target domain is reduced via maximum mean discrepancy (MMD) distance.
We demonstrate the effectiveness of the proposed QC4QA with consistent improvements against the state-of-the-art baselines on multiple datasets.
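As a rough illustration of the MMD distance mentioned above (not the QC4QA implementation), a biased empirical estimate of squared MMD with an RBF kernel can be computed as follows; the bandwidth `gamma=0.05` is an illustrative choice, where practical systems often use a median heuristic.

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.05):
    """RBF kernel matrix between row-vector batches x and y."""
    sq_dists = (
        np.sum(x**2, axis=1)[:, None]
        + np.sum(y**2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=0.05):
    """Biased estimate of squared MMD between two feature batches."""
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Stand-ins for source/target hidden features of a QA encoder.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 8))
tgt_near = rng.normal(0.0, 1.0, size=(64, 8))  # same distribution
tgt_far = rng.normal(3.0, 1.0, size=(64, 8))   # shifted distribution
```

Minimizing such a term over encoder features pulls source and target representations toward the same distribution, which is the role MMD plays in the optimization described above.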
arXiv Detail & Related papers (2022-09-12T03:12:02Z)
- Improving Unsupervised Question Answering via Summarization-Informed Question Generation [47.96911338198302]
Question Generation (QG) is the task of generating a plausible question for a ⟨passage, answer⟩ pair.
We make use of freely available news summary data, transforming declarative sentences into appropriate questions using dependency parsing, named entity recognition and semantic role labeling.
The resulting questions are then combined with the original news articles to train an end-to-end neural QG model.
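A toy, regex-level illustration of one rule from this family follows. The paper identifies constituents with full dependency parsing, NER, and semantic role labeling; the single copula rule and the function name `copula_to_question` below are illustrative assumptions only.

```python
import re

def copula_to_question(sentence: str):
    """Turn a simple 'X is Y.' declarative into 'What is Y?'.

    Toy rule: match a copular sentence 'SUBJECT is COMPLEMENT.'
    and ask about the subject, which becomes the answer. Real
    systems find the subject with a dependency parse, not a regex.
    """
    m = re.match(r"^(.+?) is (.+)\.$", sentence.strip())
    if m is None:
        return None
    subject, complement = m.groups()
    return f"What is {complement}?", subject

q, a = copula_to_question("Paris is the capital of France.")
```

Here `q` is "What is the capital of France?" and `a` is "Paris"; pairs produced this way can then serve as weakly supervised training data for a neural QG model, as the entry above describes.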
arXiv Detail & Related papers (2021-09-16T13:08:43Z)
- Exploring Question-Specific Rewards for Generating Deep Questions [42.243227323241584]
We design three different rewards that aim to improve the fluency, relevance, and answerability of generated questions.
We find that optimizing question-specific rewards generally leads to better performance in automatic evaluation metrics.
arXiv Detail & Related papers (2020-11-02T16:37:30Z)
- Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs [62.71505254770827]
We propose a hierarchical conditional variational autoencoder (HCVAE) for generating QA pairs given unstructured texts as contexts.
Our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of data for training.
arXiv Detail & Related papers (2020-05-28T08:26:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated list (including the accuracy of all information) and is not responsible for any consequences of its use.