Taxonomy-Structured Domain Adaptation
- URL: http://arxiv.org/abs/2306.07874v2
- Date: Sat, 1 Jul 2023 20:39:15 GMT
- Title: Taxonomy-Structured Domain Adaptation
- Authors: Tianyi Liu, Zihao Xu, Hao He, Guang-Yuan Hao, Guang-He Lee, Hao Wang
- Abstract summary: We tackle a generalization with taxonomy-structured domains, which formalizes domains with nested, hierarchical similarity structures.
We build on the classic adversarial framework and introduce a novel taxonomist, which competes with the adversarial discriminator to preserve the taxonomy information.
- Score: 21.432546714330023
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Domain adaptation aims to mitigate distribution shifts among different
domains. However, traditional formulations are mostly limited to categorical
domains, greatly simplifying nuanced domain relationships in the real world. In
this work, we tackle a generalization with taxonomy-structured domains, which
formalizes domains with nested, hierarchical similarity structures such as
animal species and product catalogs. We build on the classic adversarial
framework and introduce a novel taxonomist, which competes with the adversarial
discriminator to preserve the taxonomy information. The equilibrium recovers
the classic adversarial domain adaptation's solution if given a non-informative
domain taxonomy (e.g., a flat taxonomy where all leaf nodes connect to the root
node) while yielding non-trivial results with other taxonomies. Empirically,
our method achieves state-of-the-art performance on both synthetic and
real-world datasets with successful adaptation. Code is available at
https://github.com/Wang-ML-Lab/TSDA.
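The competing objectives described in the abstract can be illustrated with a minimal sketch. This is not the authors' TSDA implementation (see their repository for the real code): it uses plain NumPy linear maps as stand-in networks, toy dimensions, and domain labels as a proxy for taxonomy information, all of which are assumptions for illustration.

```python
# Minimal sketch of adversarial domain adaptation with a taxonomist.
# NOT the authors' TSDA code: toy NumPy linear "networks", illustrative sizes.
import numpy as np

rng = np.random.default_rng(0)
n, in_dim, feat_dim, num_domains = 32, 8, 16, 4

W_enc = rng.normal(scale=0.1, size=(in_dim, feat_dim))        # encoder
W_disc = rng.normal(scale=0.1, size=(feat_dim, num_domains))  # discriminator
W_tax = rng.normal(scale=0.1, size=(feat_dim, num_domains))   # taxonomist

x = rng.normal(size=(n, in_dim))
d = rng.integers(0, num_domains, size=n)  # domain labels (proxy for taxonomy)

def cross_entropy(logits, labels):
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

z = np.maximum(x @ W_enc, 0.0)                # encoded features (ReLU)
loss_disc = cross_entropy(z @ W_disc, d)      # discriminator: identify the domain
loss_tax = cross_entropy(z @ W_tax, d)        # taxonomist: keep taxonomy info in z
loss_enc = -loss_disc + loss_tax              # encoder fools disc, helps taxonomist
```

With a flat (non-informative) taxonomy the taxonomist term carries no extra signal, which is consistent with the abstract's claim that the equilibrium then recovers classic adversarial domain adaptation.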
Related papers
- Cross-Domain Semantic Segmentation on Inconsistent Taxonomy using VLMs [1.4182672294839365]
This paper introduces a novel approach, Cross-Domain Semantic segmentation on Inconsistent taxonomy using Vision Language Models (CSI).
It effectively performs domain-adaptive semantic segmentation even in situations of source-target class mismatches.
arXiv Detail & Related papers (2024-08-05T06:32:20Z) - Prototypical Contrast Adaptation for Domain Adaptive Semantic Segmentation [52.63046674453461]
Prototypical Contrast Adaptation (ProCA) is a contrastive learning method for unsupervised domain adaptive semantic segmentation.
ProCA incorporates inter-class information into class-wise prototypes, and adopts the class-centered distribution alignment for adaptation.
arXiv Detail & Related papers (2022-07-14T04:54:26Z) - Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z) - Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
arXiv Detail & Related papers (2021-12-16T11:09:29Z) - TADA: Taxonomy Adaptive Domain Adaptation [143.68890984935726]
Traditional domain adaptation addresses the task of adapting a model to a novel target domain under limited supervision.
We introduce the more general taxonomy adaptive domain adaptation problem, allowing for inconsistent taxonomies between the two domains.
On the label-level, we employ a bilateral mixed sampling strategy to augment the target domain, and a relabelling method to unify and align the label spaces.
arXiv Detail & Related papers (2021-09-10T11:58:56Z) - Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Vicinal and categorical domain adaptation [43.707303372718336]
We propose novel losses of adversarial training at both domain and category levels.
We propose a concept of vicinal domains whose instances are produced by a convex combination of pairs of instances respectively from the two domains.
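The vicinal-domain construction above amounts to a mixup-style convex combination of instance pairs. A hedged sketch (not the paper's implementation; batch size, dimensions, and the Beta mixing distribution are illustrative assumptions):

```python
# Sketch: build vicinal instances as convex combinations of source/target pairs.
import numpy as np

rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 3))               # a batch of source-domain instances
xt = rng.normal(size=(5, 3))               # a paired batch of target-domain instances
lam = rng.beta(1.0, 1.0, size=(5, 1))      # per-pair mixing coefficients in [0, 1]
x_vicinal = lam * xs + (1 - lam) * xt      # vicinal-domain instances
```

Each vicinal instance lies on the line segment between its source and target parents, so the vicinal domains interpolate between the two original domains.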
arXiv Detail & Related papers (2021-03-05T03:47:24Z) - CoRel: Seed-Guided Topical Taxonomy Construction by Concept Learning and Relation Transferring [37.1330815281983]
We propose a method for seed-guided topical taxonomy construction, which takes a corpus and a seed taxonomy described by concept names as input.
A relation transferring module learns and transfers the user's interested relation along multiple paths to expand the seed taxonomy structure in width and depth.
A concept learning module enriches the semantics of each concept node by jointly embedding the taxonomy.
arXiv Detail & Related papers (2020-10-13T22:00:31Z) - Octet: Online Catalog Taxonomy Enrichment with Self-Supervision [67.26804972901952]
We present a self-supervised end-to-end framework, Octet, for Online Catalog Taxonomy EnrichmenT.
We propose to train a sequence labeling model for term extraction and employ graph neural networks (GNNs) to capture the taxonomy structure.
Octet enriches an online catalog in production to twice its original size in the open-world evaluation.
arXiv Detail & Related papers (2020-06-18T04:53:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.