Uncertainty Awareness on Unsupervised Domain Adaptation for Time Series Data
- URL: http://arxiv.org/abs/2508.18630v1
- Date: Tue, 26 Aug 2025 03:13:08 GMT
- Title: Uncertainty Awareness on Unsupervised Domain Adaptation for Time Series Data
- Authors: Weide Liu, Xiaoyang Zhong, Lu Wang, Jingwen Hou, Yuemei Luo, Jiebin Yan, Yuming Fang
- Abstract summary: Unsupervised domain adaptation methods seek to generalize effectively on unlabeled test data. We propose incorporating multi-scale feature extraction and uncertainty estimation to improve the model's generalization and robustness across domains.
- Score: 49.36938105983916
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation methods seek to generalize effectively on unlabeled test data, especially when encountering the common challenge in time series data that distribution shifts occur between training and testing datasets. In this paper, we propose incorporating multi-scale feature extraction and uncertainty estimation to improve the model's generalization and robustness across domains. Our approach begins with a multi-scale mixed input architecture that captures features at different scales, increasing training diversity and reducing feature discrepancies between the training and testing domains. Based on the mixed input architecture, we further introduce an uncertainty awareness mechanism based on evidential learning by imposing a Dirichlet prior on the labels to facilitate both target prediction and uncertainty estimation. The uncertainty awareness mechanism enhances domain adaptation by aligning features with the same labels across different domains, which leads to significant performance improvements in the target domain. Additionally, our uncertainty-aware model demonstrates a much lower Expected Calibration Error (ECE), indicating better-calibrated prediction confidence. Our experimental results show that this combined approach of mixed input architecture with the uncertainty awareness mechanism achieves state-of-the-art performance across multiple benchmark datasets, underscoring its effectiveness in unsupervised domain adaptation for time series data.
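The evidential-learning component described in the abstract can be illustrated with a short sketch. The helper names below are hypothetical; the sketch assumes only the standard evidential deep learning construction (Dirichlet parameters alpha = evidence + 1, vacuity-style uncertainty u = K / sum(alpha)) and the standard binned Expected Calibration Error metric the abstract refers to, not the paper's actual implementation.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Given non-negative per-class evidence (e.g. from a softplus head),
    form Dirichlet parameters alpha = evidence + 1 and return the expected
    class probabilities and the uncertainty u = K / sum(alpha) used in
    evidential deep learning."""
    alpha = np.asarray(evidence, dtype=float) + 1.0
    strength = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength per sample
    probs = alpha / strength                      # expected class probabilities
    k = alpha.shape[-1]                           # number of classes
    uncertainty = k / strength.squeeze(-1)        # 1.0 when no evidence at all
    return probs, uncertainty

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: group predictions by confidence and average the
    |accuracy - confidence| gap, weighted by the fraction of samples per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```

A sample with zero evidence gets maximal uncertainty u = 1, while accumulating evidence for one class drives u toward 0; a well-calibrated model is one whose ECE is close to zero.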
Related papers
- CoCAI: Copula-based Conformal Anomaly Identification for Multivariate Time-Series [0.3495246564946556]
We propose a novel framework that harnesses the power of generative artificial intelligence and copula-based modeling to deliver accurate predictions and enable robust anomaly detection.
arXiv Detail & Related papers (2025-07-23T14:15:31Z) - Does Unsupervised Domain Adaptation Improve the Robustness of Amortized Bayesian Inference? A Systematic Evaluation [3.4109073456116477]
Recent robust approaches employ unsupervised domain adaptation (UDA) to match the embedding spaces of simulated and observed data. We demonstrate that aligning summary spaces between domains effectively mitigates the impact of unmodeled phenomena or noise. Our results underscore the need for careful consideration of misspecification types when using UDA to increase the robustness of ABI.
arXiv Detail & Related papers (2025-02-07T14:13:51Z) - Adapting Prediction Sets to Distribution Shifts Without Labels [16.478151550456804]
We focus on a standard set-valued prediction framework called conformal prediction (CP). This paper studies how to improve its practical performance using only unlabeled data from the shifted test domain. We show that our methods provide consistent improvement over existing baselines and nearly match the performance of fully supervised methods.
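For readers unfamiliar with conformal prediction, a minimal split-conformal sketch is below. The helper name is hypothetical and this is not the paper's method: it assumes exchangeable calibration and test data and nonconformity scores such as 1 minus the softmax probability of the true label.

```python
import numpy as np

def conformal_sets(cal_scores, test_scores, alpha=0.1):
    """Split conformal prediction: pick a finite-sample-corrected quantile of
    the calibration nonconformity scores, then include in each test prediction
    set every class whose score falls at or below that threshold. Marginal
    coverage of roughly 1 - alpha holds under exchangeability."""
    n = len(cal_scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(cal_scores, level, method="higher")
    return [np.where(s <= q)[0].tolist() for s in test_scores]
```

Distribution shift breaks the exchangeability assumption, which is exactly why the paper above studies how to adapt the threshold using unlabeled test-domain data.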
arXiv Detail & Related papers (2024-06-03T15:16:02Z) - Training of Neural Networks with Uncertain Data: A Mixture of Experts Approach [0.0]
"Uncertainty-aware Mixture of Experts" (uMoE) is a novel solution aimed at addressing aleatoric uncertainty within Neural Network (NN) based predictive models.
Our findings demonstrate the superior performance of uMoE over baseline methods in effectively managing data uncertainty.
This innovative approach boasts broad applicability across diverse data-driven domains, including but not limited to biomedical signal processing, autonomous driving, and production quality control.
arXiv Detail & Related papers (2023-12-13T11:57:15Z) - Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
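The post-hoc sampling idea in this blurb can be sketched generically. The function below is a hypothetical illustration, not the paper's algorithm: it only assumes access to some stochastic predictor (e.g. a model with dropout left on, or a generative decoder) and summarizes the spread of its repeated outputs without assuming a parametric predictive distribution.

```python
import numpy as np

def sampled_predictive_uncertainty(predict_fn, x, n_samples=50, rng=None):
    """Post-hoc, sampling-based uncertainty: call a stochastic predictor
    repeatedly on the same input and report the mean prediction together
    with the standard deviation of the samples as an uncertainty estimate.
    `predict_fn(x, rng)` is a user-supplied stochastic callable."""
    if rng is None:
        rng = np.random.default_rng(0)
    samples = np.stack([predict_fn(x, rng) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
```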
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z) - Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z) - Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using soft pseudo-label strategy.
At the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
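The two ingredients of this framework — soft pseudo-labels and a curriculum weight between domain losses — can be sketched as follows. Both helpers are hypothetical illustrations of the general techniques, not the paper's implementation; the sharpening uses standard temperature scaling, and the curriculum is a simple linear ramp.

```python
import numpy as np

def soft_pseudo_labels(probs, temperature=0.5):
    """Sharpen target-domain predictions into soft pseudo-labels via
    temperature scaling instead of hard argmax labels, so that uncertain
    predictions contribute less to the adaptation loss."""
    p = np.asarray(probs, dtype=float) ** (1.0 / temperature)
    return p / p.sum(axis=-1, keepdims=True)  # renormalize per sample

def curriculum_weight(epoch, total_epochs):
    """Linear curriculum ramp: the weight on the target-domain loss grows
    from 0 to 1 over training, so early epochs rely on the labeled source."""
    return min(1.0, epoch / max(1, total_epochs))
```

With temperature 0.5, a prediction of (0.8, 0.2) sharpens toward the dominant class while remaining a distribution, avoiding the error amplification that hard pseudo-labels can cause.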
arXiv Detail & Related papers (2021-12-03T14:47:32Z) - Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.