Learning Distinctive Margin toward Active Domain Adaptation
- URL: http://arxiv.org/abs/2203.05738v1
- Date: Fri, 11 Mar 2022 03:30:58 GMT
- Title: Learning Distinctive Margin toward Active Domain Adaptation
- Authors: Ming Xie, Yuxi Li, Yabiao Wang, Zekun Luo, Zhenye Gan, Zhongyi Sun,
Mingmin Chi, Chengjie Wang, Pei Wang
- Abstract summary: In this work, we propose a concise but effective ADA method called Select-by-Distinctive-Margin (SDM)
SDM consists of a maximum margin loss and a margin sampling algorithm for data selection.
We benchmark SDM under the standard active learning setting, demonstrating that our algorithm achieves competitive results with good data scalability.
- Score: 27.091800612463455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite plenty of efforts focusing on improving domain adaptation
(DA) under unsupervised or few-shot semi-supervised settings, the solution of
active learning has recently attracted more attention due to its suitability
for transferring models in a more practical way, with limited annotation
resources on target data. Nevertheless, most active learning methods are not
inherently designed to handle the domain gap between data distributions; on
the other hand, some active domain adaptation (ADA) methods usually require
complicated query functions, which are vulnerable to overfitting. In this work,
we propose a concise but effective ADA method called
Select-by-Distinctive-Margin (SDM), which consists of a maximum margin loss and
a margin sampling algorithm for data selection. We provide theoretical analysis
to show that SDM works like a Support Vector Machine, storing hard examples
around decision boundaries and exploiting them to find informative and
transferable data. In addition, we propose two variants of our method, one is
designed to adaptively adjust the gradient from margin loss, the other boosts
the selectivity of margin sampling by taking the gradient direction into
account. We benchmark SDM under the standard active learning setting,
demonstrating that our algorithm achieves competitive results with good data
scalability. Code is
available at https://github.com/TencentYoutuResearch/ActiveLearning-SDM
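The two ingredients named in the abstract, a maximum margin loss and margin-based sampling, can be sketched generically as follows. This is a minimal illustration of the margin-sampling idea and a hinge-style multiclass margin loss, not the paper's exact SDM implementation; all function names and the particular loss form are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def margin_sampling(logits, budget):
    """Select the `budget` unlabeled samples with the smallest gap
    between the top-1 and top-2 class probabilities, i.e. the samples
    closest to the decision boundary."""
    probs = softmax(logits)
    sorted_p = np.sort(probs, axis=1)
    margins = sorted_p[:, -1] - sorted_p[:, -2]
    return np.argsort(margins)[:budget]

def max_margin_loss(logits, labels, margin=1.0):
    """Hinge-style multiclass margin loss: penalize samples whose
    true-class score does not exceed the best wrong-class score by at
    least `margin`, so only hard examples near the boundary contribute
    gradient (SVM-like behaviour)."""
    n = logits.shape[0]
    true_scores = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf  # mask out the true class
    best_wrong = masked.max(axis=1)
    return np.maximum(0.0, margin - (true_scores - best_wrong)).mean()
```

In an active-learning loop, `margin_sampling` would pick the annotation batch from the unlabeled target pool after each round of training with `max_margin_loss`; samples with large margins contribute zero loss and are never queried, which mirrors the SVM-like intuition described above.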
Related papers
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement with the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z) - Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
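IDM's precise selection criterion is not spelled out in this summary; uncertainty-driven selection of this general kind is often implemented with predictive entropy. The sketch below is an illustrative stand-in under that assumption, not IDM's actual criterion.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Shannon entropy (in nats) of each sample's predicted class
    distribution; higher entropy means a less certain prediction."""
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_most_uncertain(probs, budget):
    """Pick the `budget` samples the model is least certain about,
    i.e. those with the highest predictive entropy."""
    return np.argsort(-predictive_entropy(probs))[:budget]
```

Selecting high-entropy samples concentrates the annotation budget on inputs the model finds ambiguous, which is what makes such criteria useful for quick adaptation with few labels.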
arXiv Detail & Related papers (2023-09-25T15:56:01Z) - Divide and Adapt: Active Domain Adaptation via Customized Learning [56.79144758380419]
We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instances into four categories with stratified transferable properties.
With a novel data subdivision protocol based on uncertainty and domainness, DiaNA can accurately recognize the most gainful samples.
Thanks to the "divide-and-adapt" spirit, DiaNA can handle data with large variations of domain gap.
arXiv Detail & Related papers (2023-07-21T14:37:17Z) - Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z) - Flexible deep transfer learning by separate feature embeddings and manifold alignment [0.0]
Object recognition is a key enabler across industry and defense.
Unfortunately, algorithms trained on existing labeled datasets do not directly generalize to new data because the data distributions do not match.
We propose a novel deep learning framework that overcomes this limitation by learning separate feature extractions for each domain.
arXiv Detail & Related papers (2020-12-22T19:24:44Z) - Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer while fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - Online Meta-Learning for Multi-Source and Semi-Supervised Domain Adaptation [4.1799778475823315]
We propose a framework to enhance performance by meta-learning the initial conditions of existing DA algorithms.
We present variants for both multi-source unsupervised domain adaptation (MSDA), and semi-supervised domain adaptation (SSDA)
We achieve state of the art results on several DA benchmarks including the largest scale DomainNet.
arXiv Detail & Related papers (2020-04-09T07:48:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.