Time-Series Domain Adaptation via Sparse Associative Structure
Alignment: Learning Invariance and Variance
- URL: http://arxiv.org/abs/2205.03554v1
- Date: Sat, 7 May 2022 05:06:36 GMT
- Title: Time-Series Domain Adaptation via Sparse Associative Structure
Alignment: Learning Invariance and Variance
- Authors: Zijian Li, Ruichu Cai, Jiawei Chen, Yuguang Yan, Wei Chen, Keli Zhang,
Junjian Ye
- Abstract summary: Domain adaptation on time-series data is often encountered in industry but has received limited attention in academia.
We propose Sparse Associative structure alignment by learning Invariance and Variance.
We extract the domain-invariant unweighted sparse associative structures with a unidirectional alignment restriction and embed the domain-variant strengths via a well-designed autoregressive module.
- Score: 20.838533947309802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation on time-series data is often encountered in
industry but has received limited attention in academia. Most existing domain
adaptation methods for time-series data borrow ideas from methods designed for
non-time-series data to extract the domain-invariant representation. However,
two difficulties peculiar to time-series data remain unsolved. 1) Modeling the
domain-invariant yet complex dependence among different timestamps is not a
trivial task. 2) The domain-variant information is important, but how to
leverage it remains largely underexplored.
Fortunately, the stability of causal structures across different domains
inspires us to explore the structures behind time-series data. Based on
this inspiration, we investigate the domain-invariant unweighted sparse
associative structures and the domain-variant strengths of the structures. To
achieve this, we propose Sparse Associative structure alignment by learning
Invariance and Variance (SASA-IV in short), a model that simultaneously aligns
the invariant unweighted sparse associative structures and considers the variant
information for time-series unsupervised domain adaptation. Technically, we
extract the domain-invariant unweighted sparse associative structures with a
unidirectional alignment restriction and embed the domain-variant strengths via
a well-designed autoregressive module. Experimental results not only show
that our model yields state-of-the-art performance on three real-world datasets
but also provide insightful findings about knowledge transfer.
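To make the description concrete, the sketch below gives one possible reading of the components the abstract names: per-variable encoders whose cross-variable attention yields an (approximately) unweighted associative structure, a unidirectional alignment loss that pulls the target structure toward the source structure only, and an autoregressive module that re-introduces domain-variant strengths. It is a minimal illustration under assumed module choices (LSTM/GRU encoders, softmax attention standing in for the paper's sparse attention, an L1 alignment loss), not the authors' implementation.

```python
# Hedged sketch of the abstract's ingredients; all names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAssociativeEncoder(nn.Module):
    def __init__(self, n_vars: int, hidden: int = 32):
        super().__init__()
        # one recurrent encoder per variable summarizes that variable's history
        self.var_rnns = nn.ModuleList(
            [nn.LSTM(1, hidden, batch_first=True) for _ in range(n_vars)]
        )
        self.query = nn.Linear(hidden, hidden)
        self.key = nn.Linear(hidden, hidden)
        # autoregressive module that models domain-variant strengths
        self.strength_rnn = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, x):  # x: (batch, time, n_vars)
        states = [rnn(x[..., i:i + 1])[0][:, -1] for i, rnn in enumerate(self.var_rnns)]
        h = torch.stack(states, dim=1)                     # (batch, n_vars, hidden)
        scores = self.query(h) @ self.key(h).transpose(1, 2) / h.size(-1) ** 0.5
        structure = torch.softmax(scores, dim=-1)          # stand-in for sparse attention
        strengths, _ = self.strength_rnn(h)                # domain-variant strengths
        weighted = structure * torch.sigmoid(strengths @ strengths.transpose(1, 2))
        feature = (weighted @ h).mean(dim=1)               # pooled representation
        return structure, feature

def unidirectional_alignment_loss(src_structure, tgt_structure):
    # Align the target structure toward the (detached) source structure only,
    # so the source-side estimate is not dragged toward the unlabeled target.
    return F.l1_loss(tgt_structure, src_structure.detach())
```

A training step would then combine a task loss on labeled source data with `unidirectional_alignment_loss(src_structure, tgt_structure)` computed on paired source and target batches.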
Related papers
- Exploiting Aggregation and Segregation of Representations for Domain Adaptive Human Pose Estimation [50.31351006532924]
Human pose estimation (HPE) has received increasing attention recently due to its wide application in motion analysis, virtual reality, healthcare, etc.
It suffers from a lack of diverse labeled real-world datasets due to time- and labor-intensive annotation.
We introduce a novel framework that capitalizes on both representation aggregation and segregation for domain adaptive human pose estimation.
arXiv Detail & Related papers (2024-12-29T17:59:45Z) - Learning Latent Spaces for Domain Generalization in Time Series Forecasting [60.29403194508811]
Time series forecasting is vital in many real-world applications, yet developing models that generalize well on unseen relevant domains remains underexplored.
We propose a framework for domain generalization in time series forecasting by mining the latent factors that govern temporal dependencies across domains.
Our approach uses a decomposition-based architecture with a new Conditional $\beta$-Variational Autoencoder (VAE), wherein time series data is first decomposed into trend-cyclical and seasonal components.
arXiv Detail & Related papers (2024-12-15T12:41:53Z) - tPARAFAC2: Tracking evolving patterns in (incomplete) temporal data [0.7285444492473742]
We introduce t(emporal)PARAFAC2 which utilizes temporal smoothness regularization on the evolving factors.
Our numerical experiments on both simulated and real datasets demonstrate the effectiveness of the temporal smoothness regularization.
arXiv Detail & Related papers (2024-07-01T15:10:55Z) - Enhancing Evolving Domain Generalization through Dynamic Latent
Representations [47.3810472814143]
We propose Mutual Information-Based Sequential Autoencoders (MISTS), a new framework that learns both dynamic and invariant features.
Our experimental results on both synthetic and real-world datasets demonstrate that MISTS succeeds in capturing both evolving and invariant information.
arXiv Detail & Related papers (2024-01-16T16:16:42Z) - UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series
Forecasting [59.11817101030137]
This research advocates for a unified model paradigm that transcends domain boundaries.
Learning an effective cross-domain model, however, presents several challenges.
We propose UniTime for effective cross-domain time series learning.
arXiv Detail & Related papers (2023-10-15T06:30:22Z) - Context-aware Domain Adaptation for Time Series Anomaly Detection [69.3488037353497]
Time series anomaly detection is a challenging task with a wide range of real-world applications.
Recent efforts have been devoted to time series domain adaptation to leverage knowledge from similar domains.
We propose a framework that combines context sampling and anomaly detection into a joint learning procedure.
arXiv Detail & Related papers (2023-04-15T02:28:58Z) - Domain Generalization In Robust Invariant Representation [10.132611239890345]
In this paper, we investigate the generalization of invariant representations on out-of-distribution data.
We show that the invariant model learns unstructured latent representations that are robust to distribution shifts.
arXiv Detail & Related papers (2023-04-07T00:58:30Z) - Domain-incremental Cardiac Image Segmentation with Style-oriented Replay
and Domain-sensitive Feature Whitening [67.6394526631557]
M&Ms should incrementally learn from each incoming dataset and progressively update with improved functionality as time goes by.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
arXiv Detail & Related papers (2022-11-09T13:07:36Z) - Time Series Domain Adaptation via Sparse Associative Structure Alignment [29.003081310633323]
We propose a novel sparse associative structure alignment model for domain adaptation.
First, we generate the segment set to overcome the obstacle of temporal offsets.
Second, intra-variable and inter-variable sparse attention mechanisms are devised to extract the associative structure of time-series data.
Third, the associative structure alignment is used to guide the transfer of knowledge from the source domain to the target one (a minimal illustrative sketch of the segment step appears after this list).
arXiv Detail & Related papers (2020-12-22T02:30:40Z)
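As referenced in the entry above, here is a hedged illustration of the segment-set step from the SASA paper: suffix windows of several lengths ending at the current step are kept, so that source and target series with different temporal offsets still expose comparable local segments. The window lengths, padding, and tensor layout are assumptions for illustration, not the paper's configuration.

```python
# Illustrative segment-set construction (assumed window lengths and layout).
import torch

def segment_set(x: torch.Tensor, lengths=(4, 8, 16)) -> torch.Tensor:
    """x: (batch, time, n_vars) -> (batch, n_segments, n_vars, max_len)."""
    batch, time, n_vars = x.shape
    max_len = max(lengths)
    assert time >= max_len, "series must be at least as long as the largest window"
    segments = []
    for length in lengths:
        seg = x[:, time - length:, :]                        # last `length` steps
        pad = x.new_zeros(batch, max_len - length, n_vars)   # left-pad to a common length
        segments.append(torch.cat([pad, seg], dim=1))
    out = torch.stack(segments, dim=1)                       # (batch, n_segments, max_len, n_vars)
    return out.permute(0, 1, 3, 2).contiguous()
```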
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.