Regressive Domain Adaptation for Unsupervised Keypoint Detection
- URL: http://arxiv.org/abs/2103.06175v1
- Date: Wed, 10 Mar 2021 16:45:22 GMT
- Title: Regressive Domain Adaptation for Unsupervised Keypoint Detection
- Authors: Junguang Jiang, Yifei Ji, Ximei Wang, Yufeng Liu, Jianmin Wang,
Mingsheng Long
- Abstract summary: Domain adaptation (DA) aims at transferring knowledge from a labeled source domain to an unlabeled target domain.
We present a method of regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Our method brings large improvements of 8% to 11% in PCK on different datasets.
- Score: 67.2950306888855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) aims at transferring knowledge from a labeled source
domain to an unlabeled target domain. Though many DA theories and algorithms
have been proposed, most of them are tailored to classification settings and
may fail in regression tasks, especially in the practical keypoint detection
task. To tackle this difficult but significant task, we present a method of
regressive domain adaptation (RegDA) for unsupervised keypoint detection.
Inspired by the latest theoretical work, we first utilize an adversarial
regressor to maximize the disparity on the target domain and train a feature
generator to minimize this disparity. However, due to the high dimension of the
output space, this regressor fails to detect samples that deviate from the
support of the source. To overcome this problem, we propose two important
ideas. First, based on our observation that the probability density of the
output space is sparse, we introduce a spatial probability distribution to
describe this sparsity and then use it to guide the learning of the adversarial
regressor. Second, to alleviate the optimization difficulty in the
high-dimensional space, we convert the minimax game in adversarial training
into the minimization of two opposite goals. Extensive experiments show that
our method yields large improvements of 8% to 11% in PCK (Percentage of
Correct Keypoints) across different datasets.
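To make the adversarial-disparity step concrete, here is a minimal PyTorch-style sketch. It is an illustration under assumptions, not the authors' released code: the module names (FeatureGenerator, RegressionHead), the input and heatmap sizes, and the plain L1 disparity are invented for the example, and the sketch omits the sparsity-guided spatial distribution and the reformulation of the minimax game into two opposite goals described above.

# Minimal sketch of adversarial-disparity training for keypoint heatmaps.
# All module names, shapes, and the L1 disparity are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 16        # number of keypoints (assumed)
H = W = 64    # heatmap resolution (assumed)

class FeatureGenerator(nn.Module):
    """Small convolutional backbone producing a spatial feature map."""
    def __init__(self, out_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class RegressionHead(nn.Module):
    """Predicts one heatmap per keypoint from the shared features."""
    def __init__(self, in_ch=256, num_keypoints=K):
        super().__init__()
        self.head = nn.Conv2d(in_ch, num_keypoints, 1)
    def forward(self, feat):
        return F.interpolate(self.head(feat), size=(H, W),
                             mode="bilinear", align_corners=False)

generator = FeatureGenerator()
main_head = RegressionHead()   # supervised on the labeled source domain
adv_head = RegressionHead()    # adversarial regressor

opt_gen = torch.optim.SGD(
    list(generator.parameters()) + list(main_head.parameters()), lr=1e-3)
opt_adv = torch.optim.SGD(adv_head.parameters(), lr=1e-3)

def disparity(h1, h2):
    """Disagreement between two heatmap predictions (plain L1 here)."""
    return (h1 - h2).abs().mean()

def train_step(x_src, y_src, x_tgt):
    # 1) Supervised heatmap regression on labeled source images.
    loss_sup = F.mse_loss(main_head(generator(x_src)), y_src)

    # 2) Adversarial regressor: maximize disparity on unlabeled target images.
    with torch.no_grad():
        f_tgt = generator(x_tgt)
    loss_adv = -disparity(main_head(f_tgt).detach(), adv_head(f_tgt))
    opt_adv.zero_grad()
    loss_adv.backward()
    opt_adv.step()

    # 3) Feature generator (and main head): minimize the same disparity.
    f_tgt = generator(x_tgt)
    loss_gen = loss_sup + disparity(main_head(f_tgt), adv_head(f_tgt).detach())
    opt_gen.zero_grad()
    loss_gen.backward()
    opt_gen.step()

# Dummy tensors standing in for source images/heatmaps and target images.
train_step(torch.randn(4, 3, 256, 256), torch.rand(4, K, H, W),
           torch.randn(4, 3, 256, 256))

The adversarial head is pushed to disagree with the main head on target images, while the generator is pushed to produce features on which the two heads agree; this is the minimax structure that the paper then stabilizes with the sparsity prior and the two-goal reformulation.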
Related papers
- Uncertainty-Guided Alignment for Unsupervised Domain Adaptation in
Regression [5.939858158928473]
Unsupervised Domain Adaptation for Regression (UDAR) aims to adapt a model from a labeled source domain to an unlabeled target domain for regression tasks.
Recent successful works in UDAR mostly focus on subspace alignment, involving the alignment of a selected subspace within the entire feature space.
We propose an effective method for UDAR by incorporating guidance from uncertainty.
arXiv Detail & Related papers (2024-01-24T14:55:02Z)
- Two-Stage Adaptive Network for Semi-Supervised Cross-Domain Crater Detection under Varying Scenario Distributions [17.28368878719324]
We propose a two-stage adaptive network (TAN) for cross-domain crater detection.
Our network is built on the YOLOv5 detector, where a series of strategies are employed to enhance its cross-domain generalisation ability.
Experimental results on benchmark datasets demonstrate that the proposed network can enhance domain adaptation ability for crater detection under varying scenario distributions.
arXiv Detail & Related papers (2023-12-11T07:16:49Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
Another method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and
Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using a soft pseudo-label strategy.
At the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
arXiv Detail & Related papers (2021-12-03T14:47:32Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples (a minimal sketch of such an estimate appears after this list).
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
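For the KL Guided Domain Adaptation entry above, the minibatch KL estimate it mentions can be sketched as follows. This is a sketch under assumptions, not that paper's implementation: the probabilistic representation network is taken to be a diagonal-Gaussian encoder (GaussianEncoder and the layer sizes are invented), and KL(p_S(z) || p_T(z)) is approximated by sampling representations from the source minibatch and treating each marginal as a uniform mixture of the per-example Gaussians in its minibatch.

# Sketch of a minibatch estimate of KL between source and target representation
# marginals, assuming a diagonal-Gaussian encoder (names are illustrative).
import math
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Maps an input to the mean and log-variance of a diagonal Gaussian over z."""
    def __init__(self, in_dim=784, z_dim=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

def log_mixture_density(z, mu, logvar):
    """Log-density of a uniform mixture of diagonal Gaussians at each z.
    z: (N, D) query points; mu, logvar: (B, D) minibatch component parameters."""
    z = z.unsqueeze(1)                                   # (N, 1, D)
    mu, logvar = mu.unsqueeze(0), logvar.unsqueeze(0)    # (1, B, D)
    log_comp = -0.5 * ((z - mu) ** 2 / logvar.exp()
                       + logvar + math.log(2 * math.pi)).sum(-1)  # (N, B)
    return torch.logsumexp(log_comp, dim=1) - math.log(mu.shape[1])

def minibatch_kl(encoder, x_src, x_tgt):
    """Monte Carlo estimate of KL(p_S(z) || p_T(z)) from one minibatch pair."""
    mu_s, lv_s = encoder(x_src)
    mu_t, lv_t = encoder(x_tgt)
    # Reparameterized samples from the source representation distribution.
    z_s = mu_s + torch.randn_like(mu_s) * (0.5 * lv_s).exp()
    return (log_mixture_density(z_s, mu_s, lv_s)
            - log_mixture_density(z_s, mu_t, lv_t)).mean()

encoder = GaussianEncoder()
kl_estimate = minibatch_kl(encoder, torch.randn(64, 784), torch.randn(64, 784))

In a full training loop, this estimate would be added, scaled by a trade-off weight, to the supervised loss on the labeled source domain.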
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.