Learning to Generalize for Cross-domain QA
- URL: http://arxiv.org/abs/2305.08208v2
- Date: Thu, 18 May 2023 10:46:20 GMT
- Title: Learning to Generalize for Cross-domain QA
- Authors: Yingjie Niu, Linyi Yang, Ruihai Dong, Yue Zhang
- Abstract summary: We propose a novel approach that combines prompting methods with a linear probing then fine-tuning strategy.
Our method has been theoretically and empirically shown to be effective in enhancing the generalization ability of both generative and discriminative models.
Our method can be easily integrated into any pre-trained model and offers a promising solution to the under-explored cross-domain QA task.
- Score: 11.627572092891226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There have been growing concerns regarding the out-of-domain generalization
ability of natural language processing (NLP) models, particularly in
question-answering (QA) tasks. Current synthesized data augmentation methods
for QA are hampered by increased training costs. To address this issue, we
propose a novel approach that combines prompting methods with a linear probing
then fine-tuning strategy, which entails no additional cost. Our method
has been theoretically and empirically shown to be effective in enhancing the
generalization ability of both generative and discriminative models. Our
approach outperforms state-of-the-art baselines, with an average increase in F1
score of 4.5%-7.9%. Furthermore, our method can be easily integrated into any
pre-trained model and offers a promising solution to the under-explored
cross-domain QA task. We release our source code on GitHub.
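The linear probing then fine-tuning (LP-FT) recipe the abstract refers to is a two-phase training schedule. Below is a minimal sketch, assuming a Hugging Face extractive QA model; the checkpoint, learning rates, epoch counts, and `train_loader` (tokenized SQuAD-style batches) are illustrative assumptions, not the paper's exact configuration.
```python
# Minimal LP-FT sketch: probe the head on a frozen backbone, then fine-tune all.
import torch
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

def set_backbone_trainable(model, trainable):
    # The span-prediction head is `qa_outputs`; everything else is the backbone.
    for name, param in model.named_parameters():
        if not name.startswith("qa_outputs"):
            param.requires_grad = trainable

def run_epochs(model, loader, optimizer, epochs):
    model.train()
    for _ in range(epochs):
        for batch in loader:  # `train_loader` is assumed: batches with start/end positions
            loss = model(**batch).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# Phase 1: linear probing -- freeze the backbone, train only the head.
set_backbone_trainable(model, False)
head_params = [p for p in model.parameters() if p.requires_grad]
run_epochs(model, train_loader, torch.optim.AdamW(head_params, lr=1e-3), epochs=1)

# Phase 2: fine-tuning -- unfreeze everything at a smaller learning rate, so the
# already-probed head does not drag the pre-trained features far away early on.
set_backbone_trainable(model, True)
run_epochs(model, train_loader, torch.optim.AdamW(model.parameters(), lr=2e-5), epochs=2)
```
The intuition behind the ordering is that probing first gives the head a sensible starting point, so full fine-tuning distorts the pre-trained features less, which is what tends to help out-of-domain robustness.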
Related papers
- Towards Robust Extractive Question Answering Models: Rethinking the Training Methodology [0.34530027457862006]
Previous research has shown that existing models, when trained on EQA datasets that include unanswerable questions, demonstrate a significant lack of robustness.
Our proposed training method includes a novel loss function for the EQA problem and challenges an implicit assumption present in numerous EQA datasets.
Our models exhibit significantly enhanced robustness against two types of adversarial attacks, with a performance decrease only about a third of that seen for the default models.
arXiv Detail & Related papers (2024-09-29T20:35:57Z)
- Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models [102.72940700598055]
In reasoning tasks, even a minor error can cascade into inaccurate results.
We develop a method that avoids introducing external resources, relying instead on perturbations to the input.
Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks.
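An illustrative sketch of that masking step (not the paper's exact implementation; the mask rate and the token-id interface are assumptions):
```python
# Randomly replace a subset of chain-of-thought tokens with the mask id.
import random

def mask_reasoning_tokens(token_ids, mask_token_id, mask_rate=0.15, protected=()):
    """Mask each rationale token with probability `mask_rate`, skipping protected ids."""
    return [
        mask_token_id if t not in protected and random.random() < mask_rate else t
        for t in token_ids
    ]

# Usage: apply only to the chain-of-thought segment of a training example,
# leaving the question and final-answer tokens untouched.
```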
arXiv Detail & Related papers (2024-03-04T16:21:54Z)
- Multi-Task Learning for Routing Problem with Cross-Problem Zero-Shot Generalization [18.298695520665348]
Vehicle routing problems (VRPs) can be found in numerous real-world applications.
In this work, we make the first attempt to tackle the crucial challenge of cross-problem generalization.
Our proposed model can successfully solve VRPs with unseen attribute combinations in a zero-shot generalization manner.
arXiv Detail & Related papers (2024-02-23T13:25:23Z)
- Attention-guided Generative Models for Extractive Question Answering [17.476450946279037]
Recently, pretrained generative sequence-to-sequence (seq2seq) models have achieved great success in question answering.
We propose a simple strategy to obtain an extractive answer span from the generative model by leveraging the decoder cross-attention patterns.
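A hedged sketch of one way to realize this (an illustrative heuristic, not necessarily the paper's exact procedure): average the cross-attention that generated answer tokens place on each source token, then return the contiguous source window with the highest mean-centered attention mass.
```python
import numpy as np

def span_from_cross_attention(cross_attn, max_len=10):
    # cross_attn: (num_answer_tokens, num_source_tokens), already averaged
    # over layers and heads (an assumption about the aggregation).
    scores = np.asarray(cross_attn).mean(axis=0)
    centered = scores - scores.mean()  # so adding low-attention tokens hurts a span
    best, best_span = -np.inf, (0, 0)
    for i in range(len(centered)):
        total = 0.0
        for j in range(i, min(i + max_len, len(centered))):
            total += centered[j]
            if total > best:
                best, best_span = total, (i, j)
    return best_span  # inclusive (start, end) indices into the source tokens
```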
arXiv Detail & Related papers (2021-10-12T23:02:35Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
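In the soft Q-learning view, the language model's next-token logits can be read as Q-values, and training regresses them toward a soft Bellman target. A minimal one-step sketch, with the reward shape and target-network details simplified away:
```python
import torch
import torch.nn.functional as F

def soft_q_loss(q_logits, next_q_logits, actions, rewards, gamma=0.99):
    # q_logits:      (batch, vocab) Q-values (logits) at the current step
    # next_q_logits: (batch, vocab) Q-values at the next step (a target network in practice)
    # actions:       (batch,) token ids actually taken
    # rewards:       (batch,) per-step task reward (assumed given by the task metric)
    q_taken = q_logits.gather(1, actions.unsqueeze(1)).squeeze(1)
    v_next = torch.logsumexp(next_q_logits, dim=1)   # soft value of the next state
    target = rewards + gamma * v_next.detach()
    return F.mse_loss(q_taken, target)
```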
arXiv Detail & Related papers (2021-06-14T18:48:40Z)
- Learning to Perturb Word Embeddings for Out-of-distribution QA [55.103586220757464]
We propose a simple yet effective DA method based on a noise generator, which learns to perturb the word embedding of the input questions and context without changing their semantics.
We train QA models with our word-embedding perturbation on a single source dataset and validate them on five different target domains.
Notably, the model trained with our method outperforms the model trained with more than 240K artificially generated QA pairs.
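A hedged sketch of the general idea: a small learned module emits a per-token noise scale, and Gaussian noise of that scale is added to the input embeddings before the encoder. The module shape and the noise cap are illustrative assumptions.
```python
import torch
import torch.nn as nn

class EmbeddingPerturber(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.scale = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.Sigmoid())

    def forward(self, embeddings):
        # embeddings: (batch, seq_len, hidden). Noise is kept small and
        # token-dependent so the perturbed input stays close in meaning.
        sigma = self.scale(embeddings) * 0.1  # cap on noise magnitude (assumed)
        return embeddings + sigma * torch.randn_like(embeddings)
```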
arXiv Detail & Related papers (2021-05-06T14:12:26Z)
- UnitedQA: A Hybrid Approach for Open Domain Question Answering [70.54286377610953]
We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models.
Our approach outperforms previous state-of-the-art models by 3.3 and 2.7 points in exact match on NaturalQuestions and TriviaQA respectively.
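The summary does not spell out how the two readers are combined; as a purely hypothetical illustration, one simple hybrid is a weighted vote over normalized candidate answer strings:
```python
from collections import defaultdict

def hybrid_answer(extractive_preds, generative_preds, w_ext=0.5, w_gen=0.5):
    # Each preds argument: list of (answer_string, score) pairs from one reader.
    # The weights and string normalization are assumptions, not UnitedQA's rule.
    votes = defaultdict(float)
    for ans, score in extractive_preds:
        votes[ans.strip().lower()] += w_ext * score
    for ans, score in generative_preds:
        votes[ans.strip().lower()] += w_gen * score
    return max(votes, key=votes.get)
```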
arXiv Detail & Related papers (2021-01-01T06:36:16Z)
- SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration.
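A compact sketch of those two ingredients for a discrete-action setting; the sigmoid weighting follows the paper's stated form, while the temperature, exploration coefficient, and tensor shapes are illustrative.
```python
import torch

def bellman_weights(target_q_std, temperature=10.0):
    # (a) Down-weight Bellman errors on targets the Q-ensemble is uncertain
    # about: w = sigmoid(-std * T) + 0.5, so weights lie in (0.5, 1.0].
    return torch.sigmoid(-target_q_std * temperature) + 0.5

def ucb_action(q_values):
    # (b) q_values: (ensemble_size, num_actions). Pick the action with the
    # highest upper confidence bound over the ensemble (bonus coefficient 1, assumed).
    mean, std = q_values.mean(dim=0), q_values.std(dim=0)
    return int(torch.argmax(mean + std))
```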
arXiv Detail & Related papers (2020-07-09T17:08:44Z) - Logic-Guided Data Augmentation and Regularization for Consistent
Question Answering [55.05667583529711]
This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions.
Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model.
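For comparison questions, one such logical rule is symmetry: if "Is A \<comp\> than B?" is answered "yes", then the flipped question must be answered "no". A hedged sketch of generating that augmented example (the templating is an assumption; the consistency regularizer would then penalize models that answer the pair inconsistently):
```python
def augment_symmetric(question, entity_a, entity_b, answer):
    """'Is A <comp> than B?' = yes  implies  'Is B <comp> than A?' = no."""
    # Swap the two entity mentions via a temporary placeholder.
    flipped_q = (question.replace(entity_a, "<TMP>")
                         .replace(entity_b, entity_a)
                         .replace("<TMP>", entity_b))
    flipped_ans = "no" if answer == "yes" else "yes"
    return flipped_q, flipped_ans

# Example:
# augment_symmetric("Is the Nile longer than the Amazon?",
#                   "the Nile", "the Amazon", "yes")
# -> ("Is the Amazon longer than the Nile?", "no")
```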
arXiv Detail & Related papers (2020-04-21T17:03:08Z)