Learning Distinctive Margin toward Active Domain Adaptation
- URL: http://arxiv.org/abs/2203.05738v1
- Date: Fri, 11 Mar 2022 03:30:58 GMT
- Title: Learning Distinctive Margin toward Active Domain Adaptation
- Authors: Ming Xie, Yuxi Li, Yabiao Wang, Zekun Luo, Zhenye Gan, Zhongyi Sun,
Mingmin Chi, Chengjie Wang, Pei Wang
- Abstract summary: In this work, we propose a concise but effective ADA method called Select-by-Distinctive-Margin (SDM).
SDM consists of a maximum margin loss and a margin sampling algorithm for data selection.
We benchmark SDM in the standard active learning setting, demonstrating that our algorithm achieves competitive results with good data scalability.
- Score: 27.091800612463455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite plenty of efforts focusing on improving domain adaptation (DA)
under unsupervised and few-shot semi-supervised settings, active learning has
recently started to attract more attention because it offers a practical way to
transfer models with a limited annotation budget on target data. Nevertheless,
most active learning methods are not inherently designed to handle the domain
gap between data distributions; on the other hand, some active domain
adaptation (ADA) methods usually require complicated query functions, which are
vulnerable to overfitting. In this work,
we propose a concise but effective ADA method called
Select-by-Distinctive-Margin (SDM), which consists of a maximum margin loss and
a margin sampling algorithm for data selection. We provide theoretical analysis
to show that SDM works like a Support Vector Machine, storing hard examples
around decision boundaries and exploiting them to find informative and
transferable data. In addition, we propose two variants of our method: one
adaptively adjusts the gradient from the margin loss, while the other boosts
the selectivity of margin sampling by taking the gradient direction into
account. We benchmark SDM in the standard active learning setting,
demonstrating that our algorithm achieves competitive results with good data
scalability. Code is
available at https://github.com/TencentYoutuResearch/ActiveLearning-SDM
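As a rough illustration of the two ingredients named above, the sketch below pairs a multi-class hinge-style margin loss with classic margin sampling (querying the unlabeled samples whose top two class probabilities are closest). The margin value, function names, and exact loss form are assumptions for illustration only, not the authors' implementation; the linked repository contains the real code.

```python
import torch
import torch.nn.functional as F

def max_margin_loss(logits, labels, margin=1.0):
    """Hinge-style multi-class margin loss: push the true-class logit above
    every other logit by at least `margin` (the value 1.0 is an assumption)."""
    true = logits.gather(1, labels.unsqueeze(1))        # (N, 1) true-class logits
    violations = F.relu(logits - true + margin)         # (N, C) margin violations
    violations.scatter_(1, labels.unsqueeze(1), 0.0)    # ignore the true class
    return violations.sum(dim=1).mean()

def margin_sampling(logits, budget):
    """Classic margin sampling: query the samples whose top-1 and top-2
    predicted probabilities are closest, i.e. nearest a decision boundary."""
    probs = F.softmax(logits, dim=1)
    top2 = probs.topk(2, dim=1).values                  # (N, 2), sorted descending
    margins = top2[:, 0] - top2[:, 1]                   # small margin = uncertain
    return margins.argsort()[:budget]                   # indices to send for labeling
```

Under this reading, training with the hinge-style loss concentrates gradient on hard examples near class boundaries (the Support Vector Machine analogy in the abstract), and the sampler then queries exactly those boundary samples for annotation.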
Related papers
- Unsupervised Domain Adaptation Via Data Pruning [0.0]
We consider the problem from the perspective of unsupervised domain adaptation (UDA).
We propose AdaPrune, a method for UDA in which training examples are removed in an attempt to align the training distribution to that of the target data.
As a method for UDA, we show that AdaPrune outperforms related techniques, and is complementary to other UDA algorithms such as CORAL.
arXiv Detail & Related papers (2024-09-18T15:48:59Z)
- Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present Learn From The Learnt (LFTL), a novel SFADA paradigm that leverages the knowledge learnt by the source-pretrained model and by actively iterated models, without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z)
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy bases the query on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement of the predicted label (a rough Monte Carlo sketch follows this entry).
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
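The LDM above is defined through hypotheses that disagree with the current prediction. As a loose Monte Carlo reading (an assumption for illustration, not the paper's estimator), one can perturb the classifier's weights and measure how easily each sample's predicted label flips:

```python
import copy
import torch

@torch.no_grad()
def disagreement_rate(model, x, n_draws=32, noise_scale=0.01):
    """Rough proxy for how easily a sample's label flip-flops: the fraction
    of weight-perturbed model copies whose prediction disagrees with the
    original one. The Gaussian perturbation and `noise_scale` are
    assumptions, not the paper's actual method."""
    base_pred = model(x).argmax(dim=1)
    flips = torch.zeros(x.size(0), device=x.device)
    for _ in range(n_draws):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(noise_scale * torch.randn_like(p))   # perturb the hypothesis
        flips += (noisy(x).argmax(dim=1) != base_pred).float()
    return flips / n_draws   # higher rate = easier to flip = more informative
```

Samples with the highest flip rate would then be the first to be queried for labels.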
- Mean-AP Guided Reinforced Active Learning for Object Detection [31.304039641225504]
This paper introduces Mean-AP Guided Reinforced Active Learning for Object Detection (MGRAL), a novel approach that leverages the concept of expected model output change as an informativeness measure for deep detection networks.
Our approach demonstrates strong performance, establishing a new paradigm in reinforcement learning-based active learning for object detection.
arXiv Detail & Related papers (2023-10-12T14:59:22Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training (a generic uncertainty-ranking sketch follows this entry).
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
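IDM's exact criterion is not spelled out in this summary; as a generic stand-in (an assumption, not the paper's code), a pixel-wise predictive-entropy ranking over candidate images might look like this:

```python
import numpy as np

def entropy_ranking(probs, budget):
    """Rank candidate images by mean pixel-wise predictive entropy.
    `probs` has shape (N, C, H, W) with class probabilities along axis 1.
    A generic uncertainty criterion used here as a placeholder for IDM's."""
    eps = 1e-8
    pixel_entropy = -(probs * np.log(probs + eps)).sum(axis=1)  # (N, H, W)
    scores = pixel_entropy.mean(axis=(1, 2))                    # image-level score
    return np.argsort(scores)[::-1][:budget]                    # most uncertain first
```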
- Divide and Adapt: Active Domain Adaptation via Customized Learning [56.79144758380419]
We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instances into four categories with stratified transferable properties.
With a novel data subdivision protocol based on uncertainty and domainness, DiaNA can accurately recognize the most gainful samples (a toy sketch of such a four-way split follows this entry).
Thanks to the "divide-and-adapt" spirit, DiaNA can handle data with large variations in domain gap.
arXiv Detail & Related papers (2023-07-21T14:37:17Z)
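DiaNA's uncertainty and domainness scores come from its own models; assuming both are already available as per-sample arrays, a toy threshold-based four-way split (fixed cutoffs and category names are assumptions; the paper's protocol is learned) could be sketched as:

```python
import numpy as np

def four_way_split(uncertainty, domainness, u_thr=0.5, d_thr=0.5):
    """Toy partition of target samples by thresholding uncertainty and
    domainness; the thresholds and category semantics are illustrative only."""
    u_hi = uncertainty >= u_thr
    d_hi = domainness >= d_thr
    return {
        "confident, source-like": np.where(~u_hi & ~d_hi)[0],
        "confident, target-like": np.where(~u_hi & d_hi)[0],
        "uncertain, source-like": np.where(u_hi & ~d_hi)[0],
        "uncertain, target-like": np.where(u_hi & d_hi)[0],
    }
```

Each bucket would then presumably receive its own customized learning treatment, per the paper's title.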
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement-learning-based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer while fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)