Continual Test-time Domain Adaptation via Dynamic Sample Selection
- URL: http://arxiv.org/abs/2310.03335v2
- Date: Mon, 27 Nov 2023 13:18:11 GMT
- Title: Continual Test-time Domain Adaptation via Dynamic Sample Selection
- Authors: Yanshuo Wang, Jie Hong, Ali Cheraghian, Shafin Rahman, David
Ahmedt-Aristizabal, Lars Petersson, Mehrtash Harandi
- Abstract summary: This paper proposes a Dynamic Sample Selection (DSS) method for Continual Test-time Domain Adaptation (CTDA)
We apply joint positive and negative learning on both high- and low-quality samples to reduce the risk of using wrong information.
Our approach is also evaluated in the 3D point cloud domain, showcasing its versatility and potential for broader applicability.
- Score: 38.82346845855512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The objective of Continual Test-time Domain Adaptation (CTDA) is to gradually
adapt a pre-trained model to a sequence of target domains without accessing the
source data. This paper proposes a Dynamic Sample Selection (DSS) method for
CTDA. DSS consists of dynamic thresholding, positive learning, and negative
learning processes. Traditionally, models learn from unlabeled unknown
environment data and equally rely on all samples' pseudo-labels to update their
parameters through self-training. However, noisy predictions exist in these
pseudo-labels, so all samples are not equally trustworthy. Therefore, in our
method, a dynamic thresholding module is first designed to separate suspected
low-quality samples from high-quality ones. The selected low-quality samples are
more likely to be wrongly predicted. Therefore, we apply joint positive and
negative learning on both high- and low-quality samples to reduce the risk of
using wrong information. We conduct extensive experiments that demonstrate the
effectiveness of our proposed method for CTDA in the image domain,
outperforming the state-of-the-art results. Furthermore, our approach is also
evaluated in the 3D point cloud domain, showcasing its versatility and
potential for broader applicability.
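The pipeline described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the EMA-based threshold update, the loss forms, and all function names are assumptions chosen to mirror the three described components (dynamic thresholding, positive learning, negative learning).

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def dynamic_threshold(confidences, prev_thresh, momentum=0.9):
    # Illustrative dynamic threshold: an exponential moving average of the
    # batch's mean confidence (the paper's exact rule may differ).
    return momentum * prev_thresh + (1 - momentum) * confidences.mean()

def split_by_quality(logits, thresh):
    # Pseudo-label each sample; samples whose max softmax confidence clears
    # the threshold are treated as high-quality.
    probs = softmax(logits)
    conf = probs.max(axis=1)
    pseudo = probs.argmax(axis=1)
    return pseudo, conf >= thresh

def positive_loss(probs, labels):
    # Positive learning: standard cross-entropy on the pseudo-labels,
    # applied to the trusted (high-quality) samples.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def negative_loss(probs, labels):
    # Negative (complementary) learning: push probability mass away from the
    # possibly wrong pseudo-label, safer for low-quality samples.
    return -np.log(1.0 - probs[np.arange(len(labels)), labels] + 1e-12).mean()
```

In this sketch, high-confidence samples would be trained with `positive_loss` and low-confidence ones with `negative_loss`, so that noisy pseudo-labels contribute only weak, inverted supervision rather than direct (possibly wrong) targets.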
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
Training-free test-time dynamic adapter (TDA) is a promising approach to address this issue.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota)
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Foster Adaptivity and Balance in Learning with Noisy Labels [26.309508654960354]
We propose a novel approach named SED to deal with label noise in a Self-adaptivE and class-balanceD manner.
A mean-teacher model is then employed to correct labels of noisy samples.
We additionally propose a self-adaptive and class-balanced sample re-weighting mechanism to assign different weights to detected noisy samples.
arXiv Detail & Related papers (2024-07-03T03:10:24Z)
- Uncertainty Measurement of Deep Learning System based on the Convex Hull of Training Sets [0.13265175299265505]
We propose To-hull Uncertainty and Closure Ratio, which measure the uncertainty of a trained model based on the convex hull of the training data.
They can observe the positional relation between the convex hull of the learned data and an unseen sample and infer how far the sample extrapolates beyond the convex hull.
arXiv Detail & Related papers (2024-05-25T06:25:24Z)
- A Gradient-based Approach for Online Robust Deep Neural Network Training with Noisy Labels [27.7867122240632]
In this paper, we propose a novel gradient-based approach to enable the online selection of noisy labels.
Online Gradient-based Selection (OGRS) can automatically select clean samples through gradient-update steps from datasets with varying clean ratios, without changing the parameter settings.
arXiv Detail & Related papers (2023-06-08T08:57:06Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Rethinking Precision of Pseudo Label: Test-Time Adaptation via Complementary Learning [10.396596055773012]
We propose a novel complementary learning approach to enhance test-time adaptation.
In test-time adaptation tasks, information from the source domain is typically unavailable.
We highlight that the risk function of complementary labels agrees with their vanilla loss formula.
arXiv Detail & Related papers (2023-01-15T03:36:33Z)
- Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features and neglect the dynamic nature of bias.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z)
- Labeling-Free Comparison Testing of Deep Learning Models [28.47632100019289]
We propose a labeling-free comparison testing approach to overcome the limitations of labeling effort and sampling randomness.
Our approach outperforms the baseline methods by up to 0.74 and 0.53 on Spearman's correlation and Kendall's $\tau$, regardless of the dataset and distribution shift.
arXiv Detail & Related papers (2022-04-08T10:55:45Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
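The Fisher regularizer mentioned in the last entry penalizes drift of important parameters during adaptation. A minimal sketch, assuming a diagonal Fisher estimate from squared gradients (the function names and the scalar weight `lam` are illustrative, not from the paper):

```python
import numpy as np

def fisher_importance(grads):
    # Diagonal Fisher information estimate: the mean squared gradient of the
    # loss w.r.t. each parameter, averaged over a batch of samples.
    # grads: array of shape (num_samples, num_params)
    return np.mean(np.square(grads), axis=0)

def fisher_penalty(theta, theta_anchor, fisher, lam=1.0):
    # Quadratic penalty that resists changing parameters with high Fisher
    # importance, keeping them close to their pre-adaptation anchor values.
    return lam * np.sum(fisher * (theta - theta_anchor) ** 2)
```

Adding this penalty to the adaptation loss lets unimportant parameters move freely while anchoring the ones the source model relies on, which is what prevents forgetting.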
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.