A Multi-stage Framework with Mean Subspace Computation and Recursive
Feedback for Online Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2207.00003v1
- Date: Fri, 24 Jun 2022 03:50:34 GMT
- Title: A Multi-stage Framework with Mean Subspace Computation and Recursive
Feedback for Online Unsupervised Domain Adaptation
- Authors: Jihoon Moon, Debasmit Das, C. S. George Lee
- Abstract summary: We propose a novel framework to solve real-world situations when the target data are unlabeled and arriving online sequentially in batches.
To project the data from the source and the target domains to a common subspace and manipulate the projected data in real time, our proposed framework introduces a novel method.
Experiments on six datasets were conducted to investigate in depth the effect and contribution of each stage in our proposed framework.
- Score: 9.109788577327503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we address the Online Unsupervised Domain Adaptation (OUDA)
problem and propose a novel multi-stage framework to solve real-world
situations when the target data are unlabeled and arriving online sequentially
in batches. To project the data from the source and the target domains to a
common subspace and manipulate the projected data in real-time, our proposed
framework introduces a novel method, called the Incremental Computation of
Mean-Subspace (ICMS) technique, which computes an approximation of the
mean-target subspace on a Grassmann manifold and is proven to be a close
approximation of the Karcher mean. Furthermore, the transformation matrix computed from the
mean-target subspace is applied to the next target data in the
recursive-feedback stage, aligning the target data closer to the source domain.
The computation of the transformation matrix and the prediction of the
next-target subspace improve the performance of the recursive-feedback stage by
considering the cumulative temporal dependency among the flow of the target
subspace on the Grassmann manifold. The labels of the transformed target data
are predicted by the pre-trained source classifier, and the classifier is then
updated with the transformed data and predicted labels. Extensive experiments on
six datasets were conducted to investigate in depth the effect and contribution
of each stage in our proposed framework and its performance over previous
approaches in terms of classification accuracy and computational speed. In
addition, the experiments on traditional manifold-based learning models and
neural-network-based learning models demonstrated the applicability of our
proposed framework for various types of learning models.
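The pipeline described in the abstract (batch subspace extraction, incremental mean-subspace computation, feedback transformation of the next batch) can be sketched numerically. The following is a hypothetical simplification, not the authors' ICMS algorithm: it stands in for the Karcher-mean computation on the Grassmann manifold with a naive basis average plus re-orthonormalization, and all names, dimensions, and the alignment formula are illustrative assumptions.

```python
import numpy as np

def subspace_basis(X, d):
    # Orthonormal basis of the top-d principal directions of batch X.
    X = X - X.mean(axis=0)
    U, _, _ = np.linalg.svd(X.T @ X)
    return U[:, :d]

rng = np.random.default_rng(0)
D, d = 5, 2
source = rng.normal(size=(100, D))
S = subspace_basis(source, d)               # source subspace

mean_T = None
for t in range(3):                          # target batches arriving online
    batch = rng.normal(size=(40, D)) + 0.5
    T = subspace_basis(batch, d)
    # Crude stand-in for ICMS: running average of bases, re-orthonormalized.
    mean_T = T if mean_T is None else np.linalg.qr((t * mean_T + T) / (t + 1))[0]
    # Transformation built from the mean-target subspace, applied to the
    # current batch (the recursive-feedback step in spirit).
    G = mean_T @ mean_T.T @ S
    aligned = batch @ G
```

The actual ICMS technique averages subspaces on the Grassmann manifold with a proven proximity to the Karcher mean; this sketch only mirrors the data flow of the recursive-feedback loop.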
Related papers
- Stratified Domain Adaptation: A Progressive Self-Training Approach for Scene Text Recognition [1.2878987353423252]
Unsupervised domain adaptation (UDA) has become increasingly prevalent in scene text recognition (STR).
We introduce the Stratified Domain Adaptation (StrDA) approach, which examines the gradual escalation of the domain gap for the learning process.
We propose a novel method for employing domain discriminators to estimate the out-of-distribution and domain discriminative levels of data samples.
arXiv Detail & Related papers (2024-10-13T16:40:48Z)
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda)
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Ranking Distance Calibration for Cross-Domain Few-Shot Learning [91.22458739205766]
Recent progress in few-shot learning promotes a more realistic cross-domain setting.
Due to the domain gap and disjoint label spaces between source and target datasets, their shared knowledge is extremely limited.
We employ a re-ranking process for calibrating a target distance matrix by discovering the reciprocal k-nearest neighbours within the task.
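The reciprocal k-nearest-neighbour re-ranking mentioned above can be illustrated with a small sketch. This is a hypothetical simplification of the paper's calibration step; the function name, `k`, and the shrink factor `alpha` are assumptions, not the paper's formulation.

```python
import numpy as np

def reciprocal_knn_calibrate(dist, k=2, alpha=0.5):
    """Shrink distances between mutual k-nearest neighbours
    (hypothetical simplification of a re-ranking step)."""
    n, m = dist.shape
    # k-NN of each query among gallery items, and of each gallery item
    # among queries.
    q_nn = np.argsort(dist, axis=1)[:, :k]
    g_nn = np.argsort(dist, axis=0)[:k, :].T
    out = dist.copy()
    for i in range(n):
        for j in q_nn[i]:
            if i in g_nn[j]:            # reciprocal neighbours
                out[i, j] *= alpha      # calibrate: pull them closer
    return out
```

A pair is only calibrated when each item ranks the other among its own top-k, which is what makes the criterion "reciprocal".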
arXiv Detail & Related papers (2021-12-01T03:36:58Z)
- Learning Neural Models for Natural Language Processing in the Face of Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z)
- Graph Constrained Data Representation Learning for Human Motion Segmentation [14.611777974037194]
We propose a novel unsupervised model that learns a representation of the data and extracts clustering information from the data itself.
Experimental results on four benchmark datasets for HMS demonstrate that our approach achieves significantly better clustering performance than state-of-the-art methods.
arXiv Detail & Related papers (2021-07-28T13:49:16Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
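The idea of self-training through intermediate domains can be sketched with a nearest-centroid classifier. This is a hedged illustration, not the AuxSelfTrain algorithm: the helper name, the classifier choice, and the retrain-on-latest-domain-only policy are all assumptions.

```python
import numpy as np

def gradual_self_train(X_src, y_src, domains):
    """Hypothetical sketch of gradual self-training: fit a nearest-centroid
    classifier on the source, pseudo-label each intermediate domain in
    order, and refit on the pseudo-labelled data before moving on."""
    X, y = X_src, y_src
    for X_dom in domains:               # domains ordered source -> target
        centroids = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
        d = np.linalg.norm(X_dom[:, None, :] - centroids[None], axis=2)
        pseudo = d.argmin(axis=1)       # pseudo-labels from current model
        X, y = X_dom, pseudo            # refit on the newest domain only
    return X, y
```

When the shift between consecutive domains is small, the pseudo-labels stay reliable at each step even though source and final target differ substantially, which is the premise of gradual adaptation.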
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- BREEDS: Benchmarks for Subpopulation Shift [98.90314444545204]
We develop a methodology for assessing the robustness of models to subpopulation shift.
We leverage the class structure underlying existing datasets to control the data subpopulations that comprise the training and test distributions.
Applying this methodology to the ImageNet dataset, we create a suite of subpopulation shift benchmarks of varying granularity.
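Leveraging class structure to control subpopulations can be sketched as follows. This is a hypothetical simplification of the BREEDS-style construction, not the paper's methodology: each superclass's subclasses are split disjointly between training and test, so the label set matches while the underlying subpopulations shift.

```python
def subpopulation_split(superclasses):
    """Assign half of each superclass's subclasses to training and the
    other (disjoint) half to test (illustrative sketch)."""
    train, test = {}, {}
    for sup, subs in superclasses.items():
        half = len(subs) // 2
        train[sup], test[sup] = subs[:half], subs[half:]
    return train, test
```

A model is then trained and evaluated on the same superclass labels, but every test example comes from subclasses it never saw, which isolates robustness to subpopulation shift.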
arXiv Detail & Related papers (2020-08-11T17:04:47Z)
- Adversarial Weighting for Domain Adaptation in Regression [4.34858896385326]
We present a novel instance-based approach to handle regression tasks in the context of supervised domain adaptation.
We develop an adversarial network algorithm which learns both the source weighting scheme and the task in one feed-forward gradient descent.
arXiv Detail & Related papers (2020-06-15T09:44:04Z)
- Multi-step Online Unsupervised Domain Adaptation [10.312968200748116]
We propose a multi-step framework for the Online Unsupervised Domain Adaptation problem.
We compute the mean-target subspace inspired by the geometrical interpretation on the Euclidean space.
The transformation matrix computed from the mean-target subspace is applied to the next target data as a preprocessing step.
arXiv Detail & Related papers (2020-02-20T18:26:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.