S$^3$VAADA: Submodular Subset Selection for Virtual Adversarial Active
Domain Adaptation
- URL: http://arxiv.org/abs/2109.08901v1
- Date: Sat, 18 Sep 2021 10:53:57 GMT
- Title: S$^3$VAADA: Submodular Subset Selection for Virtual Adversarial Active
Domain Adaptation
- Authors: Harsh Rangwani, Arihant Jain, Sumukh K Aithal and R. Venkatesh Babu
- Abstract summary: In real-world scenarios it might be feasible to get labels for a small proportion of target data.
We propose S$^3$VAADA, which i) introduces a novel submodular criterion to select a maximally informative subset to label and ii) enhances a cluster-based DA procedure.
Our approach consistently outperforms the competing state-of-the-art approaches on datasets with varying degrees of domain shifts.
- Score: 49.01925978752677
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Unsupervised domain adaptation (DA) methods have focused on achieving maximal
performance through aligning features from source and target domains without
using labeled data in the target domain. However, in real-world scenarios
it is often feasible to obtain labels for a small proportion of the target data. In
these scenarios, it is important to select maximally-informative samples to
label and find an effective way to combine them with the existing knowledge
from source data. Towards achieving this, we propose S$^3$VAADA which i)
introduces a novel submodular criterion to select a maximally informative
subset to label and ii) enhances a cluster-based DA procedure through novel
improvements to effectively utilize all the available data for improving
generalization on the target domain. Our approach consistently outperforms competing
state-of-the-art approaches on datasets with varying degrees of domain shift.
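The greedy selection behind a submodular criterion like the one described above can be illustrated with a minimal sketch. The facility-location objective and Gaussian similarity used here are illustrative assumptions for exposition, not the paper's actual criterion (S$^3$VAADA combines virtual adversarial and diversity terms); what carries over is the structure of greedy submodular maximization, which enjoys a $(1 - 1/e)$ approximation guarantee:

```python
import numpy as np

def greedy_submodular_select(similarity, budget):
    """Greedily maximize a facility-location (submodular) objective:
    f(S) = sum_i max_{j in S} similarity[i, j].

    similarity: (n, n) pairwise similarity matrix between samples.
    budget:     number of samples to select for labeling.
    Returns the indices of the selected subset, in pick order.
    """
    n = similarity.shape[0]
    selected = []
    # best similarity each point currently gets from the chosen subset
    coverage = np.zeros(n)
    for _ in range(budget):
        # marginal gain of each candidate j: how much it improves coverage
        gains = np.maximum(similarity - coverage[:, None], 0.0).sum(axis=0)
        gains[selected] = -np.inf  # never re-pick a chosen sample
        j = int(np.argmax(gains))
        selected.append(j)
        coverage = np.maximum(coverage, similarity[:, j])
    return selected

# toy example: 5 points forming two clusters
feats = np.array([[0., 0.], [0.1, 0.], [0., 0.1], [5., 5.], [5.1, 5.]])
sim = np.exp(-np.linalg.norm(feats[:, None] - feats[None, :], axis=-1))
picked = greedy_submodular_select(sim, 2)  # one representative per cluster
```

Because the gain of adding a sample can only shrink as the subset grows (diminishing returns), the greedy rule is a natural fit for active-learning batch selection.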
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Divide and Adapt: Active Domain Adaptation via Customized Learning [56.79144758380419]
We present Divide-and-Adapt (DiaNA), a new ADA framework that partitions the target instances into four categories with stratified transferable properties.
With a novel data subdivision protocol based on uncertainty and domainness, DiaNA can accurately recognize the most gainful samples.
Thanks to the "divide-and-adapt" spirit, DiaNA can handle data with large variations in domain gap.
arXiv Detail & Related papers (2023-07-21T14:37:17Z)
- MADAv2: Advanced Multi-Anchor Based Active Domain Adaptation Segmentation [98.09845149258972]
We introduce active sample selection to assist domain adaptation regarding the semantic segmentation task.
With only a small workload to manually annotate these samples, the distortion of the target-domain distribution can be effectively alleviated.
A powerful semi-supervised domain adaptation strategy is proposed to alleviate the long-tail distribution problem.
arXiv Detail & Related papers (2023-01-18T07:55:22Z)
- Combating Label Distribution Shift for Active Domain Adaptation [16.270897459117755]
We consider the problem of active domain adaptation (ADA) to unlabeled target data.
Inspired by recent analysis of a critical issue arising from label distribution mismatch between source and target in domain adaptation, we devise a method that addresses this issue for the first time in ADA.
arXiv Detail & Related papers (2022-08-13T09:06:45Z)
- ADeADA: Adaptive Density-aware Active Domain Adaptation for Semantic Segmentation [23.813813896293876]
We present ADeADA, a general active domain adaptation framework for semantic segmentation.
With less than 5% of the target domain annotated, our method reaches results comparable to full supervision.
arXiv Detail & Related papers (2022-02-14T05:17:38Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings [57.24806659432956]
Generalizing deep neural networks to new target domains is critical to their real-world utility.
We study the problem of active learning (AL) under a domain shift, called Active Domain Adaptation (Active DA).
We propose Clustering Uncertainty-weighted Embeddings (CLUE), a novel label acquisition strategy for Active DA that performs uncertainty-weighted clustering to identify target instances for labeling that are both uncertain under the model and diverse in feature space.
arXiv Detail & Related papers (2020-10-16T23:37:44Z)
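The CLUE acquisition strategy summarized above (uncertainty-weighted clustering of target embeddings) can be sketched with plain NumPy. This is an illustrative reading of the summary, not the paper's implementation: predictive entropy is used as the uncertainty weight, and the deterministic initialization at the highest-entropy samples is an assumption of this sketch rather than the method's actual k-means initialization:

```python
import numpy as np

def clue_select(features, probs, budget, iters=20):
    """Uncertainty-weighted k-means acquisition in the spirit of CLUE.

    features: (n, d) embeddings of unlabeled target samples.
    probs:    (n, c) model's predicted class probabilities.
    budget:   number of samples to send for labeling.
    Returns sorted indices of one representative per weighted cluster.
    """
    # predictive entropy as the per-sample uncertainty weight
    w = -np.sum(probs * np.log(probs + 1e-12), axis=1) + 1e-12
    # deterministic init at the highest-entropy samples (a sketch-level
    # assumption; standard k-means init would also work here)
    centers = features[np.argsort(-w)[:budget]].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        assign = d.argmin(axis=1)
        for k in range(budget):
            mask = assign == k
            if mask.any():
                # entropy-weighted centroid: uncertain points pull harder
                centers[k] = np.average(features[mask], axis=0, weights=w[mask])
    # label the sample closest to each weighted centroid
    d = np.linalg.norm(features[:, None] - centers[None], axis=-1)
    return sorted(set(d.argmin(axis=0).tolist()))
```

Weighting the centroid update by entropy steers cluster centers toward regions where the model is uncertain, so the selected representatives are both diverse (one per cluster) and informative (drawn from uncertain regions).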
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.