Adaptive Boosting for Domain Adaptation: Towards Robust Predictions in Scene Segmentation
- URL: http://arxiv.org/abs/2103.15685v1
- Date: Mon, 29 Mar 2021 15:12:58 GMT
- Title: Adaptive Boosting for Domain Adaptation: Towards Robust Predictions in Scene Segmentation
- Authors: Zhedong Zheng and Yi Yang
- Abstract summary: Domain adaptation transfers the shared knowledge learned from the source domain to a new environment, i.e., the target domain.
One common practice is to train the model on both labeled source-domain data and unlabeled target-domain data.
We propose an efficient bootstrapping method, called AdaBoost Student, that explicitly learns complementary models during training.
- Score: 41.05407168312345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation transfers the shared knowledge learned from the source domain to a new environment, i.e., the target domain. One common practice is to train the model on both labeled source-domain data and unlabeled target-domain data. Yet the learned models are usually biased due to the strong supervision of the source domain. Most researchers adopt an early-stopping strategy to prevent over-fitting, but when to stop training remains a challenging problem given the lack of a target-domain validation set. In this paper, we propose an efficient bootstrapping method, called AdaBoost Student, that explicitly learns complementary models during training and liberates users from empirical early stopping. AdaBoost Student combines deep model learning with a conventional training strategy, i.e., adaptive boosting, and enables interactions between the learned models and the data sampler. We adopt an adaptive data sampler to progressively facilitate learning on hard samples and aggregate ``weak'' models to prevent over-fitting. Extensive experiments show that (1) without the need to worry about the stopping time, AdaBoost Student provides a robust solution through efficient complementary model learning during training, and (2) AdaBoost Student is orthogonal to most domain adaptation methods and can be combined with existing approaches to further improve the state-of-the-art performance. We have achieved competitive results on three widely-used scene segmentation domain adaptation benchmarks.
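The adaptive-boosting recipe the abstract describes -- keep per-sample weights, let hard samples gain weight each round, and aggregate the resulting ``weak'' models -- can be illustrated with a minimal classic-AdaBoost sketch. The decision-stump weak learner, the function names, and the `n_rounds` parameter below are illustrative assumptions; the paper itself applies this idea to deep segmentation models via an adaptive data sampler, not to stumps.

```python
import numpy as np

def adaboost_ensemble(X, y, n_rounds=5):
    """Toy AdaBoost loop: labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # start from uniform sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        # "Weak" learner: best single-feature threshold under current weights.
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # weight of this weak model
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        # Adaptive reweighting: misclassified ("hard") samples gain weight,
        # so the next round focuses on them -- the role the paper assigns
        # to its adaptive data sampler.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append((j, thr, sign))
        alphas.append(alpha)

    def predict(Xq):
        # Aggregate all weak models, weighted by alpha, to resist over-fitting.
        score = sum(a * s * np.where(Xq[:, j] > t, 1, -1)
                    for a, (j, t, s) in zip(alphas, stumps))
        return np.sign(score)
    return predict
```

The key design point mirrored here is that no single round's model is trusted on its own: the final prediction averages every snapshot, which is what removes the need to pick a single stopping time.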
Related papers
- Source-Free Test-Time Adaptation For Online Surface-Defect Detection [29.69030283193086]
We propose a novel test-time adaptation surface-defect detection approach.
It adapts pre-trained models to new domains and classes during inference.
Experiments demonstrate it outperforms state-of-the-art techniques.
arXiv Detail & Related papers (2024-08-18T14:24:05Z)
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present learn from the learnt (LFTL), a novel paradigm for SFADA to leverage the learnt knowledge from the source pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z)
- DaMSTF: Domain Adversarial Learning Enhanced Meta Self-Training for Domain Adaptation [20.697905456202754]
We propose a new self-training framework for domain adaptation, namely the Domain adversarial learning enhanced Meta Self-Training Framework (DaMSTF).
DaMSTF involves meta-learning to estimate the importance of each pseudo instance, so as to simultaneously reduce the label noise and preserve hard examples.
DaMSTF improves the performance of BERT with an average of nearly 4%.
arXiv Detail & Related papers (2023-08-05T00:14:49Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Adapting the Mean Teacher for keypoint-based lung registration under geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation, in the sense of practical deployment:
only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.