Domain Adaptation for Semantic Parsing
- URL: http://arxiv.org/abs/2006.13071v1
- Date: Tue, 23 Jun 2020 14:47:41 GMT
- Title: Domain Adaptation for Semantic Parsing
- Authors: Zechang Li, Yuxuan Lai, Yansong Feng, Dongyan Zhao
- Abstract summary: We propose a novel semantic parser for domain adaptation, where we have far less annotated data in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
- Score: 68.81787666086554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, semantic parsing has attracted much attention in the community.
Although many neural modeling efforts have greatly improved the performance, it
still suffers from the data scarcity issue. In this paper, we propose a novel
semantic parser for domain adaptation, where we have much fewer annotated data
in the target domain compared to the source domain. Our semantic parser
benefits from a two-stage coarse-to-fine framework and can thus provide
different and accurate treatments for the two stages, i.e., focusing on domain-invariant
and domain-specific information, respectively. In the coarse stage, our novel
domain discrimination component and domain relevance attention encourage the
model to learn transferable domain general structures. In the fine stage, the
model is guided to concentrate on domain related details. Experiments on a
benchmark dataset show that our method consistently outperforms several popular
domain adaptation strategies. Additionally, we show that our model can well
exploit limited target data to capture the difference between the source and
target domain, even when the target domain has far fewer training instances.
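The two-stage coarse-to-fine idea in the abstract can be sketched as follows. This is an illustrative toy, not the paper's neural implementation: the templates, domain lexicons, and question patterns below are all hypothetical. Stage one maps an utterance to a domain-general logical-form sketch (the transferable structure), and stage two fills the sketch's slots with domain-specific details.

```python
# Toy two-stage coarse-to-fine semantic parser (illustrative sketch only).
# Stage 1 (coarse): predict a domain-invariant logical-form sketch.
# Stage 2 (fine): fill domain-specific slots from a per-domain lexicon.

COARSE_TEMPLATES = {
    # domain-invariant structure: question prefix -> logical-form sketch
    "how many": "count(filter(ENTITY, FIELD = VALUE))",
    "which":    "select(filter(ENTITY, FIELD = VALUE))",
}

DOMAIN_LEXICON = {
    # hypothetical domain-specific slot fillers (learned in the real model)
    "flights":     {"ENTITY": "flight", "FIELD": "destination"},
    "restaurants": {"ENTITY": "restaurant", "FIELD": "cuisine"},
}

def coarse_stage(question: str) -> str:
    """Stage 1: map the utterance to a domain-general sketch."""
    for trigger, sketch in COARSE_TEMPLATES.items():
        if question.lower().startswith(trigger):
            return sketch
    raise ValueError(f"no sketch for: {question!r}")

def fine_stage(sketch: str, domain: str, value: str) -> str:
    """Stage 2: fill domain-specific slots in the sketch."""
    filled = sketch
    for slot, filler in DOMAIN_LEXICON[domain].items():
        filled = filled.replace(slot, filler)
    return filled.replace("VALUE", repr(value))

def parse(question: str, domain: str, value: str) -> str:
    return fine_stage(coarse_stage(question), domain, value)

print(parse("how many flights go to Boston", "flights", "Boston"))
# count(filter(flight, destination = 'Boston'))
```

The point of the decomposition is that `COARSE_TEMPLATES` can be shared across domains (and learned mostly from the data-rich source domain), while only the small `DOMAIN_LEXICON` for the target domain needs target annotations.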
Related papers
- Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z)
- Domain Adaptation from Scratch [24.612696638386623]
We present a new learning setup, "domain adaptation from scratch", which we believe to be crucial for extending the reach of NLP to sensitive domains.
In this setup, we aim to efficiently annotate data from a set of source domains such that the trained model performs well on a sensitive target domain.
Our study compares several approaches for this challenging setup, ranging from data selection and domain adaptation algorithms to active learning paradigms.
arXiv Detail & Related papers (2022-09-02T05:55:09Z)
- ADeADA: Adaptive Density-aware Active Domain Adaptation for Semantic Segmentation [23.813813896293876]
We present ADeADA, a general active domain adaptation framework for semantic segmentation.
With less than 5% of the target domain annotated, our method reaches results comparable to those of full supervision.
arXiv Detail & Related papers (2022-02-14T05:17:38Z)
- Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer [27.64947077788111]
Unsupervised domain adaptation for semantic segmentation aims to make models trained on synthetic data adapt to real images.
Previous feature-level adversarial learning methods only consider adapting models on the high-level semantic features.
We present the first attempt at explicitly using low-level edge information, which has a small inter-domain gap, to guide the transfer of semantic information.
arXiv Detail & Related papers (2021-09-18T11:51:31Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
- Multi-Source Domain Adaptation with Collaborative Learning for Semantic Segmentation [32.95273803359897]
Multi-source unsupervised domain adaptation (MSDA) aims at adapting models trained on multiple labeled source domains to an unlabeled target domain.
We propose a novel multi-source domain adaptation framework based on collaborative learning for semantic segmentation.
arXiv Detail & Related papers (2021-03-08T12:51:42Z)
- Domain Adaptation on Semantic Segmentation for Aerial Images [3.946367634483361]
We propose a novel unsupervised domain adaptation framework to address domain shift in semantic image segmentation.
We also apply entropy minimization on the target domain to produce high-confidence predictions.
We show improvement over state-of-the-art methods in terms of various metrics.
arXiv Detail & Related papers (2020-12-03T20:58:27Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches show that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease this issue by aligning the most similar stuff and instance features between the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.