ProxyMix: Proxy-based Mixup Training with Label Refinery for Source-Free
Domain Adaptation
- URL: http://arxiv.org/abs/2205.14566v1
- Date: Sun, 29 May 2022 03:45:00 GMT
- Title: ProxyMix: Proxy-based Mixup Training with Label Refinery for Source-Free
Domain Adaptation
- Authors: Yuhe Ding, Lijun Sheng, Jian Liang, Aihua Zheng, Ran He
- Abstract summary: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose an effective method named Proxy-based Mixup training with label refinery (ProxyMix).
Experiments on three 2D image and one 3D point cloud object recognition benchmarks demonstrate that ProxyMix yields state-of-the-art performance for source-free UDA tasks.
- Score: 73.14508297140652
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a
labeled source domain to an unlabeled target domain. Owing to privacy concerns
and heavy data transmission, source-free UDA, exploiting the pre-trained source
models instead of the raw source data for target learning, has been gaining
popularity in recent years. Some works attempt to recover the unseen source
domain with generative models, but this introduces additional network
parameters. Other works fine-tune the source model with pseudo labels, yet
noisy pseudo labels may misguide the decision boundary, leading to
unsatisfactory results. To tackle these issues, we propose an effective method
named
Proxy-based Mixup training with label refinery (ProxyMix). First of all, to
avoid additional parameters and explore the information in the source model,
ProxyMix defines the weights of the classifier as the class prototypes and then
constructs a class-balanced proxy source domain by the nearest neighbors of the
prototypes to bridge the unseen source domain and the target domain. To improve
the reliability of pseudo labels, we further propose the frequency-weighted
aggregation strategy to generate soft pseudo labels for unlabeled target data.
The proposed strategy exploits the internal structure of target features, pulls
target features to their semantic neighbors, and increases the weights of
samples of low-frequency classes during gradient updating. With the proxy domain
and the reliable pseudo labels, we employ two kinds of mixup regularization,
i.e., inter- and intra-domain mixup, in our framework, to align the proxy and
the target domain, enforcing the consistency of predictions, thereby further
mitigating the negative impacts of noisy labels. Experiments on three 2D image
and one 3D point cloud object recognition benchmarks demonstrate that ProxyMix
yields state-of-the-art performance for source-free UDA tasks.
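The proxy-construction and inter-domain mixup steps described in the abstract can be sketched roughly as follows. This is a toy NumPy illustration with made-up shapes and random data, not the authors' implementation: the real ProxyMix pipeline operates on deep network features, and its frequency-weighted aggregation strategy for soft pseudo labels is simplified here to a plain softmax over prototype similarities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: feature dim 16, 4 classes, 200 unlabeled target samples.
num_classes, feat_dim, n_target = 4, 16, 200
W = rng.normal(size=(num_classes, feat_dim))          # classifier weights, treated as class prototypes
target_feats = rng.normal(size=(n_target, feat_dim))  # target features from the (frozen) source encoder

# Step 1: class-balanced proxy source domain -- for each prototype, take its
# k nearest target features (cosine similarity) and label them with that class.
k = 10
W_n = W / np.linalg.norm(W, axis=1, keepdims=True)
F_n = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
sim = F_n @ W_n.T                                     # (n_target, num_classes) similarities
proxy_feats, proxy_labels = [], []
for c in range(num_classes):
    idx = np.argsort(-sim[:, c])[:k]                  # k nearest neighbors of prototype c
    proxy_feats.append(target_feats[idx])
    proxy_labels.append(np.full(k, c))
proxy_feats = np.concatenate(proxy_feats)             # (num_classes * k, feat_dim)
proxy_labels = np.eye(num_classes)[np.concatenate(proxy_labels)]  # one-hot labels

# Step 2: inter-domain mixup -- convex combinations of a proxy sample and a
# target sample, with soft pseudo labels standing in for target labels.
pseudo = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)     # softmax stand-in
lam = rng.beta(0.3, 0.3)                              # mixup coefficient
i = rng.integers(0, len(proxy_feats))
j = rng.integers(0, n_target)
mixed_x = lam * proxy_feats[i] + (1 - lam) * target_feats[j]
mixed_y = lam * proxy_labels[i] + (1 - lam) * pseudo[j]
print(mixed_x.shape, float(mixed_y.sum()))
```

A training loop would feed `mixed_x` through the classifier and penalize deviation from `mixed_y` (e.g. with a cross-entropy or consistency loss); intra-domain mixup follows the same recipe with both samples drawn from the target set.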
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- De-Confusing Pseudo-Labels in Source-Free Domain Adaptation [14.954662088592762]
Source-free domain adaptation aims to adapt a source-trained model to an unlabeled target domain without access to the source data.
We introduce a novel noise-learning approach tailored to address noise distribution in domain adaptation settings.
arXiv Detail & Related papers (2024-01-03T10:07:11Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Robust Target Training for Multi-Source Domain Adaptation [110.77704026569499]
We propose a novel Bi-level Optimization based Robust Target Training (BORT$2$) method for MSDA.
Our proposed method achieves the state of the art performance on three MSDA benchmarks, including the large-scale DomainNet dataset.
arXiv Detail & Related papers (2022-10-04T15:20:01Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Towards Robust Cross-domain Image Understanding with Unsupervised Noise Removal [18.21213151403402]
We find that contemporary domain adaptation methods for cross-domain image understanding perform poorly when the source domain is noisy.
We propose a novel method, termed Noise Tolerant Domain Adaptation, for Weakly Supervised Domain Adaptation (WSDA).
We conduct extensive experiments to evaluate the effectiveness of our method on both general images and medical images from COVID-19 and e-commerce datasets.
arXiv Detail & Related papers (2021-09-09T14:06:59Z)
- Divergence Optimization for Noisy Universal Domain Adaptation [32.05829135903389]
Universal domain adaptation (UniDA) has been proposed to transfer knowledge learned from a label-rich source domain to a label-scarce target domain.
This paper introduces a two-head convolutional neural network framework to solve all problems simultaneously.
arXiv Detail & Related papers (2021-04-01T04:16:04Z)
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume source and target domain data to be simultaneously available during training.
A pre-trained source model is always considered to be available, even though it performs poorly on the target domain due to the well-known domain shift problem.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-03-29T22:18:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.