Exploring Dropout Discriminator for Domain Adaptation
- URL: http://arxiv.org/abs/2107.04231v1
- Date: Fri, 9 Jul 2021 06:11:34 GMT
- Title: Exploring Dropout Discriminator for Domain Adaptation
- Authors: Vinod K Kurmi and Venkatesh K Subramanian and Vinay P. Namboodiri
- Abstract summary: Adaptation of a classifier to new domains is one of the challenging problems in machine learning.
We propose a curriculum based dropout discriminator that gradually increases the variance of the sample based distribution.
An ensemble of discriminators helps the model to learn the data distribution efficiently.
- Score: 27.19677042654432
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Adaptation of a classifier to new domains is one of the challenging problems
in machine learning. This has been addressed using many deep and non-deep
learning based methods. Among the methodologies used, that of adversarial
learning is widely applied to solve many deep learning problems along with
domain adaptation. These methods are based on a discriminator that ensures
source and target distributions are close. However, here we suggest that rather
than using a point estimate obtained from a single discriminator, it would be
useful if a distribution based on ensembles of discriminators could be used to
bridge this gap. This could be achieved using multiple classifiers or using
traditional ensemble methods. In contrast, we suggest that a Monte Carlo
dropout based ensemble discriminator could suffice to obtain the distribution
based discriminator. Specifically, we propose a curriculum based dropout
discriminator that gradually increases the variance of the sample based
distribution and the corresponding reverse gradients are used to align the
source and target feature representations. An ensemble of discriminators helps
the model to learn the data distribution efficiently. It also provides better
gradient estimates to train the feature extractor. The detailed results and
thorough ablation analysis show that our model outperforms state-of-the-art
results.
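The core idea in the abstract can be sketched in code: repeated stochastic forward passes through a dropout discriminator act as an implicit ensemble, and a curriculum schedule grows the dropout rate (and hence the ensemble variance) over training. The toy network, weights, schedule, and all hyperparameters below are illustrative assumptions, not the paper's implementation; the gradient-reversal alignment step is omitted and only the Monte Carlo ensemble of domain scores is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy discriminator: one hidden layer with dropout.
W1 = rng.normal(size=(16, 32))
W2 = rng.normal(size=(32, 1))

def mc_dropout_discriminator(x, p, n_samples=20):
    """Monte Carlo dropout: average n_samples stochastic forward passes.

    Each pass applies an independent dropout mask (rate p) to the hidden
    layer, so the set of passes acts as an implicit discriminator ensemble.
    Returns the mean domain score and its sample variance.
    """
    scores = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)                   # ReLU hidden layer
        mask = rng.random(h.shape) > p                # Bernoulli dropout mask
        h = h * mask / (1.0 - p)                      # inverted dropout scaling
        logits = h @ W2
        scores.append(1.0 / (1.0 + np.exp(-logits)))  # sigmoid domain score
    scores = np.stack(scores)
    return scores.mean(axis=0), scores.var(axis=0)

def curriculum_dropout(step, total_steps, p_max=0.5):
    """Curriculum schedule: dropout rate (and thus the variance of the
    sample-based distribution) grows gradually from 0 toward p_max."""
    return p_max * min(1.0, step / total_steps)

x = rng.normal(size=(4, 16))  # a toy batch of feature vectors
early_p = curriculum_dropout(step=10, total_steps=1000)
late_p = curriculum_dropout(step=900, total_steps=1000)
_, var_early = mc_dropout_discriminator(x, early_p)
_, var_late = mc_dropout_discriminator(x, late_p)
```

In the full method, the mean score would be fed back through a gradient-reversal layer so the feature extractor learns to align source and target representations; the growing variance is what distinguishes the curriculum ensemble from a single point-estimate discriminator.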
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Discriminator Guidance for Autoregressive Diffusion Models [12.139222986297264]
We introduce discriminator guidance in the setting of Autoregressive Diffusion Models.
We derive ways of using a discriminator together with a pretrained generative model in the discrete case.
arXiv Detail & Related papers (2023-10-24T13:14:22Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Robust Outlier Rejection for 3D Registration with Variational Bayes [70.98659381852787]
We develop a novel variational non-local network-based outlier rejection framework for robust alignment.
We propose a voting-based inlier searching strategy to cluster the high-quality hypothetical inliers for transformation estimation.
arXiv Detail & Related papers (2023-04-04T03:48:56Z)
- Refining Generative Process with Discriminator Guidance in Score-based Diffusion Models [15.571673352656264]
Discriminator Guidance aims to improve sample generation of pre-trained diffusion models.
Unlike GANs, our approach does not require joint training of score and discriminator networks.
We achieve state-of-the-art results on ImageNet 256x256 with FID 1.83 and recall 0.64, similar to the validation data's FID (1.68) and recall (0.66).
arXiv Detail & Related papers (2022-11-28T20:04:12Z)
- Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
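The minibatch KL estimation mentioned in this summary can be sketched briefly: if a probabilistic encoder maps each input to a diagonal Gaussian over the representation space, the per-example KL terms have a closed form and can be averaged over a minibatch. Everything below (encoder outputs, the standard-normal reference distribution, dimensions) is an illustrative assumption, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_gaussians(mu_p, var_p, mu_q, var_q):
    """Closed-form KL(N(mu_p, diag(var_p)) || N(mu_q, diag(var_q)))."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0,
        axis=-1,
    )

# Hypothetical probabilistic encoder outputs for a minibatch of 8 inputs,
# each mapped to a diagonal Gaussian over a 4-dimensional representation.
mu_src = rng.normal(size=(8, 4))
var_src = np.exp(rng.normal(size=(8, 4)))  # positive variances

# Reference distribution (here a standard normal, purely for illustration).
mu_ref, var_ref = np.zeros(4), np.ones(4)

# Minibatch estimate of the KL term: average the per-example closed-form KLs.
kl_estimate = kl_gaussians(mu_src, var_src, mu_ref, var_ref).mean()
```

Because each per-example KL is available in closed form, the minibatch average is an unbiased, low-cost estimate, which is what makes this objective practical to optimize with stochastic gradients.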
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- An Effective Baseline for Robustness to Distributional Shift [5.627346969563955]
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems.
We present a simple, but highly effective approach to deal with out-of-distribution detection that uses the principle of abstention.
arXiv Detail & Related papers (2021-05-15T00:46:11Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss which performs better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
- MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation [29.43532067090422]
We propose an easy-to-implement method dubbed MiniMax Entropy Networks (MMEN) based on adversarial learning.
Unlike most existing approaches which employ a generator to deal with domain difference, MMEN focuses on learning the categorical information from unlabeled target samples.
arXiv Detail & Related papers (2019-04-21T13:39:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.