SENTRY: Selective Entropy Optimization via Committee Consistency for
Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2012.11460v1
- Date: Mon, 21 Dec 2020 16:24:50 GMT
- Title: SENTRY: Selective Entropy Optimization via Committee Consistency for
Unsupervised Domain Adaptation
- Authors: Viraj Prabhu, Shivam Khare, Deeksha Kartik, Judy Hoffman
- Abstract summary: We propose a UDA algorithm that judges the reliability of a target instance based on its predictive consistency under a committee of random image transformations.
Our algorithm then selectively minimizes predictive entropy to increase confidence on highly consistent target instances, while maximizing predictive entropy to reduce confidence on highly inconsistent ones.
In combination with pseudo-label based approximate target class balancing, our approach leads to significant improvements over the state-of-the-art on 27/31 domain shifts from standard UDA benchmarks as well as benchmarks designed to stress-test adaptation under label distribution shift.
- Score: 14.086066389856173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many existing approaches for unsupervised domain adaptation (UDA) focus on
adapting under only data distribution shift and offer limited success under
additional cross-domain label distribution shift. Recent work based on
self-training using target pseudo-labels has shown promise, but on challenging
shifts pseudo-labels may be highly unreliable, and using them for self-training
may cause error accumulation and domain misalignment. We propose Selective
Entropy Optimization via Committee Consistency (SENTRY), a UDA algorithm that
judges the reliability of a target instance based on its predictive consistency
under a committee of random image transformations. Our algorithm then
selectively minimizes predictive entropy to increase confidence on highly
consistent target instances, while maximizing predictive entropy to reduce
confidence on highly inconsistent ones. In combination with pseudo-label based
approximate target class balancing, our approach leads to significant
improvements over the state-of-the-art on 27/31 domain shifts from standard UDA
benchmarks as well as benchmarks designed to stress-test adaptation under label
distribution shift.
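As a rough illustration of the objective described above, the PyTorch-style sketch below implements a committee-consistency check followed by selective entropy minimization/maximization. It is reconstructed from the abstract alone, not from the authors' released code: the majority-vote rule, the choice of anchoring on the untransformed view, and all function and variable names are assumptions.

```python
import torch
import torch.nn.functional as F

def sentry_selective_entropy_loss(model, x_target, committee_transforms):
    """Selective entropy objective on a batch of unlabeled target images.

    A committee of randomly transformed views of each image is classified;
    an image counts as "consistent" when a majority of committee predictions
    agree with the prediction on the original view. Predictive entropy is
    then minimized on consistent images and maximized on inconsistent ones.
    """
    # Anchor predictions on the untransformed views (no gradient needed).
    with torch.no_grad():
        anchor_pred = model(x_target).argmax(dim=1)            # (B,)

        # Count committee members agreeing with the anchor prediction.
        agreements = torch.zeros(x_target.size(0), device=x_target.device)
        for transform in committee_transforms:
            committee_pred = model(transform(x_target)).argmax(dim=1)
            agreements += (committee_pred == anchor_pred).float()

    consistent = agreements > len(committee_transforms) / 2.0  # (B,) bool

    # Predictive entropy on the original views, this time with gradients.
    probs = F.softmax(model(x_target), dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)

    # +1 minimizes entropy (consistent), -1 maximizes it (inconsistent).
    sign = torch.where(consistent,
                       torch.ones_like(entropy),
                       -torch.ones_like(entropy))
    return (sign * entropy).mean()
```

The pseudo-label based approximate class balancing mentioned in the abstract is omitted here; one plausible realization is to re-sample target instances so that their pseudo-label distribution is roughly uniform across classes.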
Related papers
- Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation [49.827306773992376]
Continual Test-Time Adaptation (CTTA) adapts a source pre-trained model to continually changing target distributions.
Our proposed method attains state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-12-19T15:34:52Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
The proposed method (BACG) uses gradient signals and second-order probability estimation to better align domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, Fast-BACG, which greatly shortens the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Cal-SFDA: Source-Free Domain-adaptive Semantic Segmentation with Differentiable Expected Calibration Error [50.86671887712424]
The prevalence of domain adaptive semantic segmentation has prompted concerns regarding source domain data leakage.
To circumvent the requirement for source data, source-free domain adaptation has emerged as a viable solution.
We propose a novel calibration-guided source-free domain adaptive semantic segmentation framework.
arXiv Detail & Related papers (2023-08-06T03:28:34Z)
- Predicting Class Distribution Shift for Reliable Domain Adaptive Object Detection [2.5193191501662144]
Unsupervised Domain Adaptive Object Detection (UDA-OD) uses unlabelled data to improve the reliability of robotic vision systems in open-world environments.
Previous approaches to UDA-OD based on self-training have been effective in overcoming changes in the general appearance of images.
We propose a framework for explicitly addressing class distribution shift to improve pseudo-label reliability in self-training.
arXiv Detail & Related papers (2023-02-13T00:46:34Z)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Source-free Unsupervised Domain Adaptation for Blind Image Quality Assessment [20.28784839680503]
Existing learning-based methods for blind image quality assessment (BIQA) are heavily dependent on large amounts of annotated training data.
In this paper, we take the first step towards the source-free unsupervised domain adaptation (SFUDA) in a simple yet efficient manner.
We present a group of well-designed self-supervised objectives to guide the adaptation of the batch normalization (BN) affine parameters towards the target domain.
arXiv Detail & Related papers (2022-07-17T09:42:36Z)
- Boosting Cross-Domain Speech Recognition with Self-Supervision [35.01508881708751]
Cross-domain performance of automatic speech recognition (ASR) can be severely hampered by the mismatch between training and testing distributions.
Previous work has shown that self-supervised learning (SSL) or pseudo-labeling (PL) is effective for UDA because it exploits the self-supervision available in unlabeled data.
This work presents a systematic UDA framework to fully utilize the unlabeled data with self-supervision in the pre-training and fine-tuning paradigm.
arXiv Detail & Related papers (2022-06-20T14:02:53Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance in the amount of annotated data between the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework that greatly reduces flawed model predictions using a soft pseudo-label strategy.
In the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
arXiv Detail & Related papers (2021-12-03T14:47:32Z)
- Energy-constrained Self-training for Unsupervised Domain Adaptation [25.594991545790638]
Unsupervised domain adaptation (UDA) aims to transfer the knowledge on a labeled source domain distribution to perform well on an unlabeled target domain.
Recent deep self-training methods involve an iterative process of predicting on the target domain and then taking the confident predictions as hard pseudo-labels for retraining.
In this paper, we resort to the energy-based model and constrain the training of the unlabeled target sample with the energy function minimization objective.
arXiv Detail & Related papers (2021-01-01T21:02:18Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)