Cross Domain Few-Shot Learning via Meta Adversarial Training
- URL: http://arxiv.org/abs/2202.05713v1
- Date: Fri, 11 Feb 2022 15:52:29 GMT
- Title: Cross Domain Few-Shot Learning via Meta Adversarial Training
- Authors: Jirui Qi, Richong Zhang, Chune Li, Yongyi Mao
- Abstract summary: Few-shot relation classification (RC) is one of the critical problems in machine learning.
We present a novel model that takes the aforementioned cross-domain situation into consideration.
A meta-based adversarial training framework is proposed to fine-tune the trained networks so that they adapt to data from the target domain.
- Score: 34.383449283927014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot relation classification (RC) is one of the critical problems in
machine learning. Current research focuses mainly on settings in which both
training and testing data come from the same domain. In practice, however, this
assumption is not always guaranteed. In this study, we present a novel model
that takes the aforementioned cross-domain situation into consideration. Unlike
previous models, we use only the source domain data to train the prototypical
networks and test the model on target domain data. A meta-based adversarial
training framework (\textbf{MBATF}) is proposed to fine-tune the trained
networks so that they adapt to data from the target domain. Empirical studies
confirm the effectiveness of the proposed model.
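The classification step of the prototypical networks mentioned in the abstract can be sketched as follows. This is a minimal, generic prototypical-network example, not the paper's implementation: the identity `embed` function stands in for a trained encoder, and the relation labels and feature vectors are illustrative.

```python
import math

def embed(x):
    # Stand-in embedding: identity on 2-D feature vectors.
    # In the paper's setting this would be a trained encoder network.
    return x

def prototypes(support):
    # support: dict mapping class label -> list of feature vectors.
    # Each prototype is the mean of the embedded support examples.
    protos = {}
    for label, examples in support.items():
        embedded = [embed(x) for x in examples]
        dim = len(embedded[0])
        protos[label] = [sum(v[d] for v in embedded) / len(embedded)
                         for d in range(dim)]
    return protos

def classify(query, protos):
    # Assign the query to the class of the nearest prototype (Euclidean distance).
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(protos, key=lambda label: dist(embed(query), protos[label]))

# Hypothetical few-shot support set with two relation classes.
support = {
    "born_in":   [[0.9, 0.1], [1.1, -0.1]],
    "works_for": [[-1.0, 0.9], [-0.8, 1.1]],
}
protos = prototypes(support)
print(classify([1.0, 0.0], protos))  # nearest to the "born_in" prototype
```

In the cross-domain setting described above, the encoder behind `embed` would be trained on source-domain episodes only, then fine-tuned adversarially before target-domain queries are classified this way.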
Related papers
- Quality > Quantity: Synthetic Corpora from Foundation Models for Closed-Domain Extractive Question Answering [35.38140071573828]
We study extractive question answering within closed domains and introduce the concept of targeted pre-training.
Our proposed framework uses Galactica to generate synthetic, "targeted" corpora that align with specific writing styles and topics.
arXiv Detail & Related papers (2023-10-25T20:48:16Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- GAN-based Domain Inference Attack [3.731168012111833]
We propose a generative adversarial network (GAN) based method to explore likely or similar domains of a target model.
We find that the target model interferes less with the training procedure when the auxiliary domain is more similar to the target domain.
Our experiments show that the auxiliary dataset from an MDI top-ranked domain can effectively boost the result of model-inversion attacks.
arXiv Detail & Related papers (2022-12-22T15:40:53Z)
- Meta-DMoE: Adapting to Domain Shift by Meta-Distillation from Mixture-of-Experts [33.21435044949033]
Most existing methods perform training on multiple source domains using a single model.
We propose a novel framework for unsupervised test-time adaptation, which is formulated as a knowledge distillation process.
arXiv Detail & Related papers (2022-10-08T02:28:10Z)
- One-Class Knowledge Distillation for Face Presentation Attack Detection [53.30584138746973]
This paper introduces a teacher-student framework to improve the cross-domain performance of face PAD with one-class domain adaptation.
Student networks are trained to mimic the teacher network and learn similar representations for genuine face samples of the target domain.
In the test phase, the similarity score between the representations of the teacher and student networks is used to distinguish attacks from genuine ones.
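The test-phase scoring described above can be illustrated with a small sketch. This is a generic similarity check, not the paper's code: the representation vectors and the threshold value are hypothetical, and cosine similarity is one plausible choice of similarity measure.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two representation vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_genuine(teacher_repr, student_repr, threshold=0.9):
    # The student was trained to mimic the teacher only on genuine faces,
    # so high teacher-student similarity suggests a genuine sample
    # and low similarity suggests a presentation attack.
    return cosine_similarity(teacher_repr, student_repr) >= threshold

# Illustrative representations (real ones come from the two networks).
print(is_genuine([0.2, 0.8, 0.1], [0.21, 0.79, 0.12]))  # similar representations
print(is_genuine([0.2, 0.8, 0.1], [0.9, -0.3, 0.4]))    # dissimilar representations
```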
arXiv Detail & Related papers (2022-05-08T06:20:59Z)
- Towards Online Domain Adaptive Object Detection [79.89082006155135]
Existing object detection models assume both the training and test data are sampled from the same source domain.
We propose a novel unified adaptation framework that adapts and improves generalization on the target domain in online settings.
arXiv Detail & Related papers (2022-04-11T17:47:22Z)
- Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data [95.47859525676246]
A recent study finds that existing few-shot learning methods, trained on the source domain, fail to generalize to the novel target domain when a domain gap is observed.
In this paper, we realize that the labeled target data in Cross-Domain Few-Shot Learning has not been leveraged in any way to help the learning process.
arXiv Detail & Related papers (2021-07-26T06:15:45Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles in the feature-level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge in previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune)
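The distillation half of such a two-step scheme can be sketched as a standard KL-divergence objective. This is an illustrative example of black-box distillation in general, not the Dis-tune method itself; the logits are made up, and in practice the loss would be minimized over many unlabeled target samples.

```python
import math

def softmax(logits):
    # Convert raw logits into a probability distribution.
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q is from the teacher's p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Black-box source model: only its predicted probabilities are observable.
teacher_probs = softmax([2.0, 0.5, 0.1])
# Student model's current predictions on the same unlabeled target sample.
student_probs = softmax([1.8, 0.6, 0.2])

# A distillation step would minimize this loss before fine-tuning on the target.
loss = kl_divergence(teacher_probs, student_probs)
print(loss)
```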
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
- A Brief Review of Domain Adaptation [1.2043574473965317]
This paper focuses on unsupervised domain adaptation, where the labels are only available in the source domain.
It presents some successful shallow and deep domain adaptation approaches that aim to deal with domain adaptation problems.
arXiv Detail & Related papers (2020-10-07T07:05:32Z)
- Test-time Unsupervised Domain Adaptation [3.4188171733930584]
Convolutional neural networks rarely generalise to different scanners or acquisition protocols (target domain).
We show that models adapted to a specific target subject from the target domain outperform a domain adaptation method which has seen more data of the target domain but not this specific target subject.
arXiv Detail & Related papers (2020-10-05T11:30:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.