Meta-Learning for Domain Generalization in Semantic Parsing
- URL: http://arxiv.org/abs/2010.11988v2
- Date: Mon, 12 Apr 2021 20:40:38 GMT
- Title: Meta-Learning for Domain Generalization in Semantic Parsing
- Authors: Bailin Wang, Mirella Lapata and Ivan Titov
- Abstract summary: We use a meta-learning framework which targets zero-shot domain generalization for semantic parsing.
We apply a model-agnostic training algorithm that simulates zero-shot parsing by constructing virtual train and test sets from disjoint domains.
- Score: 124.32975734073949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The importance of building semantic parsers which can be applied to new
domains and generate programs unseen at training has long been acknowledged,
and datasets testing out-of-domain performance are becoming increasingly
available. However, little or no attention has been devoted to learning
algorithms or objectives which promote domain generalization, with virtually
all existing approaches relying on standard supervised learning. In this work,
we use a meta-learning framework which targets zero-shot domain generalization
for semantic parsing. We apply a model-agnostic training algorithm that
simulates zero-shot parsing by constructing virtual train and test sets from
disjoint domains. The learning objective capitalizes on the intuition that
gradient steps that improve source-domain performance should also improve
target-domain performance, thus encouraging a parser to generalize to unseen
target domains. Experimental results on the (English) Spider and Chinese Spider
datasets show that the meta-learning objective significantly boosts the
performance of a baseline parser.
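The objective described in the abstract, take a virtual gradient step on source-domain data and require that the stepped parameters also perform well on a disjoint target domain, can be sketched in a first-order form on a toy linear model. This is only an illustration: the paper's method is applied to a neural semantic parser and the full objective differentiates through the virtual step, while the names below (`dg_maml_step`, `mse_loss_and_grad`) are invented for this sketch.

```python
import numpy as np

def mse_loss_and_grad(w, X, y):
    # Squared-error loss and its gradient for a linear model y ~ X @ w.
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

def dg_maml_step(w, source, target, alpha=0.1, lr=0.05):
    """One meta-learning update in the spirit of the paper's objective:
    a virtual gradient step computed on the source domain should also
    reduce the loss on a disjoint (virtual test) target domain."""
    Xs, ys = source
    Xt, yt = target
    # Virtual training: ordinary gradient step on the source domain.
    loss_s, grad_s = mse_loss_and_grad(w, Xs, ys)
    w_virtual = w - alpha * grad_s
    # Virtual testing: evaluate the stepped parameters on the target domain.
    loss_t, grad_t = mse_loss_and_grad(w_virtual, Xt, yt)
    # First-order approximation of the meta-gradient: combine both
    # gradients (the exact objective also differentiates through w_virtual).
    meta_grad = grad_s + grad_t
    return w - lr * meta_grad, loss_s + loss_t
```

Iterating this update on two domains that share the same underlying mapping but have different input distributions drives both the source loss and the held-out target loss down, which is the intuition the abstract appeals to.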
Related papers
- PiPa++: Towards Unification of Domain Adaptive Semantic Segmentation via Self-supervised Learning [34.786268652516355]
Unsupervised domain adaptive segmentation aims to improve the segmentation accuracy of models on target domains without relying on labeled data from those domains.
It seeks to align the feature representations of the source domain (where labeled data is available) and the target domain (where only unlabeled data is present).
arXiv Detail & Related papers (2024-07-24T08:53:29Z)
- Boosting Large Language Models with Continual Learning for Aspect-based Sentiment Analysis [33.86086075084374]
Aspect-based sentiment analysis (ABSA) is an important subtask of sentiment analysis.
We propose a Large Language Model-based Continual Learning (LLM-CL) model for ABSA.
arXiv Detail & Related papers (2024-05-09T02:00:07Z)
- Generalized Semantic Segmentation by Self-Supervised Source Domain Projection and Multi-Level Contrastive Learning [79.0660895390689]
Deep networks trained on the source domain show degraded performance when tested on unseen target domain data.
We propose a Domain Projection and Contrastive Learning (DPCL) approach for generalized semantic segmentation.
arXiv Detail & Related papers (2023-03-03T13:07:14Z)
- CLIP the Gap: A Single Domain Generalization Approach for Object Detection [60.20931827772482]
Single Domain Generalization tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain.
We propose to leverage a pre-trained vision-language model to introduce semantic domain concepts via textual prompts.
We achieve this via a semantic augmentation strategy acting on the features extracted by the detector backbone, as well as a text-based classification loss.
arXiv Detail & Related papers (2023-01-13T12:01:18Z)
- Meta-Learned Feature Critics for Domain Generalized Semantic Segmentation [38.81908956978064]
We propose a novel meta-learning scheme with feature disentanglement ability, which derives domain-invariant features for semantic segmentation with domain generalization guarantees.
Our results on benchmark datasets confirm the effectiveness and robustness of our proposed model.
arXiv Detail & Related papers (2021-12-27T06:43:39Z)
- Context-Conditional Adaptation for Recognizing Unseen Classes in Unseen Domains [48.17225008334873]
We propose a feature generative framework integrated with a COntext COnditional Adaptive (COCOA) Batch-Normalization.
The generated visual features better capture the underlying data distribution enabling us to generalize to unseen classes and domains at test-time.
We thoroughly evaluate and analyse our approach on the established large-scale benchmark DomainNet.
arXiv Detail & Related papers (2021-07-15T17:51:16Z)
- Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to open compound domain adaptation (OCDA) for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image styles, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online via the model-agnostic meta-learning (MAML) algorithm, further improving generalization.
arXiv Detail & Related papers (2020-12-15T13:21:54Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.