MAPS: A Noise-Robust Progressive Learning Approach for Source-Free
Domain Adaptive Keypoint Detection
- URL: http://arxiv.org/abs/2302.04589v1
- Date: Thu, 9 Feb 2023 12:06:08 GMT
- Title: MAPS: A Noise-Robust Progressive Learning Approach for Source-Free
Domain Adaptive Keypoint Detection
- Authors: Yuhe Ding, Jian Liang, Bo Jiang, Aihua Zheng, Ran He
- Abstract summary: Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
- Score: 76.97324120775475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing cross-domain keypoint detection methods always require accessing the
source data during adaptation, which may violate the data privacy law and pose
serious security concerns. Instead, this paper considers a realistic problem
setting called source-free domain adaptive keypoint detection, where only the
well-trained source model is provided to the target domain. For the challenging
problem, we first construct a teacher-student learning baseline by stabilizing
the predictions under data augmentation and network ensembles. Built on this,
we further propose a unified approach, Mixup Augmentation and Progressive
Selection (MAPS), to fully exploit the noisy pseudo labels of unlabeled target
data during training. On the one hand, MAPS regularizes the model to favor
simple linear behavior in-between the target samples via self-mixup
augmentation, preventing the model from over-fitting to noisy predictions. On
the other hand, MAPS employs the self-paced learning paradigm and progressively
selects pseudo-labeled samples from 'easy' to 'hard' into the training process
to reduce noise accumulation. Results on four keypoint detection datasets show
that MAPS outperforms the baseline and achieves comparable or even better
results in comparison to previous non-source-free counterparts.
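The two mechanisms in the abstract, self-mixup regularization over target samples and self-paced selection of pseudo-labeled samples from easy to hard, can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation; the function names, the Beta `alpha`, and the linear keep-fraction schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_mixup(batch, pseudo_labels, alpha=0.75):
    """Mix each target sample with another target sample (self-mixup).

    Interpolating inputs and pseudo-labels encourages linear behavior
    between target samples, damping over-fitting to noisy predictions.
    """
    lam = rng.beta(alpha, alpha)              # mixing coefficient in (0, 1)
    perm = rng.permutation(len(batch))        # random pairing within the batch
    mixed_x = lam * batch + (1 - lam) * batch[perm]
    mixed_y = lam * pseudo_labels + (1 - lam) * pseudo_labels[perm]
    return mixed_x, mixed_y

def progressive_select(losses, epoch, num_epochs, min_frac=0.5):
    """Self-paced selection: keep the lowest-loss ('easy') samples,
    growing the kept fraction linearly from min_frac to 1.0 over training."""
    keep_frac = min_frac + (1.0 - min_frac) * epoch / max(1, num_epochs - 1)
    k = max(1, int(np.ceil(keep_frac * len(losses))))
    return np.argsort(losses)[:k]             # indices of selected samples
```

In a training loop one would score the unlabeled target samples with the current model, select the low-loss subset via `progressive_select`, and train on its self-mixup of that subset.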
Related papers
- Evidentially Calibrated Source-Free Time-Series Domain Adaptation with Temporal Imputation [38.88779207555418]
Source-free domain adaptation (SFDA) aims to adapt a model pre-trained on a labeled source domain to an unlabeled target domain without access to source data.
This paper proposes MAsk And imPUte (MAPU), a novel and effective approach for time series SFDA.
We also introduce E-MAPU, which incorporates evidential uncertainty estimation to address the overconfidence issue inherent in softmax predictions.
arXiv Detail & Related papers (2024-06-04T05:36:29Z)
- Continual-MAE: Adaptive Distribution Masked Autoencoders for Continual Test-Time Adaptation [49.827306773992376]
Continual Test-Time Adaptation (CTTA) is proposed to migrate a source pre-trained model to continually changing target distributions.
Our proposed method attains state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-12-19T15:34:52Z)
- Learning with Noisy labels via Self-supervised Adversarial Noisy Masking [33.87292143223425]
We propose a novel training approach termed adversarial noisy masking.
It adaptively modulates the input data and labels simultaneously, preventing the model from over-fitting to noisy samples.
It is tested on both synthetic and real-world noisy datasets.
arXiv Detail & Related papers (2023-02-14T03:13:26Z)
- SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation [25.939292305808934]
Unsupervised domain adaptation (UDA) can transfer knowledge learned from a richly labeled dataset to an unlabeled target dataset.
In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving adversarial robustness of UDA models.
arXiv Detail & Related papers (2022-12-12T14:25:40Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
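The MMD loss mentioned above measures the distance between two sample sets via kernel mean embeddings. A minimal RBF-kernel sketch (ignoring DaC's memory bank, which is not detailed here; `gamma` is an assumed bandwidth parameter):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel.

    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]; it is near zero when
    the two sample sets come from the same distribution.
    """
    def k(A, B):
        # pairwise squared distances -> RBF kernel matrix
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

Minimizing this quantity between source-like and target-specific features pulls the two groups toward a shared distribution.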
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- ProxyMix: Proxy-based Mixup Training with Label Refinery for Source-Free Domain Adaptation [73.14508297140652]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose an effective method named Proxy-based Mixup training with label refinery (ProxyMix).
Experiments on three 2D image and one 3D point cloud object recognition benchmarks demonstrate that ProxyMix yields state-of-the-art performance for source-free UDA tasks.
arXiv Detail & Related papers (2022-05-29T03:45:00Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
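The ATC idea can be sketched in a few lines. This is an illustrative simplification, assuming the threshold is chosen so that the fraction of source examples above it matches the source accuracy; the function name and inputs are hypothetical.

```python
import numpy as np

def atc_predict_accuracy(src_conf, src_correct, tgt_conf):
    """Predict target accuracy from confidences alone.

    src_conf: per-example confidences on labeled source data
    src_correct: 0/1 correctness of the model on those source examples
    tgt_conf: per-example confidences on unlabeled target data
    """
    src_acc = src_correct.mean()
    # Threshold t such that the source fraction with conf > t equals src_acc.
    t = np.quantile(src_conf, 1.0 - src_acc)
    # Predicted target accuracy: fraction of target confidences above t.
    return (tgt_conf > t).mean()
```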
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume that source and target domain data are simultaneously available during training.
A pre-trained source model is assumed to be available, even though it performs poorly on the target domain due to the well-known domain shift problem.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-03-29T22:18:34Z)
- ANL: Anti-Noise Learning for Cross-Domain Person Re-Identification [25.035093667770052]
We propose an Anti-Noise Learning (ANL) approach, which contains two modules.
The FDA module is designed to gather id-related samples and disperse id-unrelated samples through camera-wise contrastive learning and adversarial adaptation.
The Reliable Sample Selection (RSS) module utilizes an Auxiliary Model to correct noisy labels and select reliable samples for the Main Model.
arXiv Detail & Related papers (2020-12-27T02:38:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences of its use.