Match-And-Deform: Time Series Domain Adaptation through Optimal
Transport and Temporal Alignment
- URL: http://arxiv.org/abs/2308.12686v2
- Date: Fri, 25 Aug 2023 09:12:02 GMT
- Title: Match-And-Deform: Time Series Domain Adaptation through Optimal
Transport and Temporal Alignment
- Authors: François Painblanc, Laetitia Chapel, Nicolas Courty, Chloé
Friguet, Charlotte Pelletier, and Romain Tavenard
- Abstract summary: We introduce the Match-And-Deform (MAD) approach, which finds correspondences between source and target time series.
When embedded into a deep neural network, MAD helps learn new representations of time series that both align the domains and maximize the discriminative power of the network.
Empirical studies on benchmark datasets and remote sensing data demonstrate that MAD produces meaningful sample-to-sample pairings and time shift estimates.
- Score: 10.89671409446191
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While large volumes of unlabeled data are usually available, associated
labels are often scarce. The unsupervised domain adaptation problem aims at
exploiting labels from a source domain to classify data from a related, yet
different, target domain. When time series are at stake, new difficulties arise
as temporal shifts may appear in addition to the standard feature distribution
shift. In this paper, we introduce the Match-And-Deform (MAD) approach that
aims at finding correspondences between the source and target time series while
allowing temporal distortions. The associated optimization problem
simultaneously aligns the series thanks to an optimal transport loss and the
time stamps through dynamic time warping. When embedded into a deep neural
network, MAD helps learn new representations of time series that both align
the domains and maximize the discriminative power of the network. Empirical
studies on benchmark datasets and remote sensing data demonstrate that MAD
produces meaningful sample-to-sample pairings and time shift estimates,
reaching similar or better classification performance than state-of-the-art
deep time series domain adaptation strategies.
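To make the joint alignment idea concrete, here is a heavily simplified numerical sketch (an illustration of the two ingredients, not the authors' implementation, which solves a single joint OT/DTW problem inside a deep network): DTW absorbs the temporal shift within each candidate source/target pair, and an entropic optimal transport solver (Sinkhorn) then matches samples across domains using the resulting DTW costs.

```python
import numpy as np

def dtw_cost(x, y):
    """Classic O(n*m) dynamic-time-warping cost between two 1-D series."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def sinkhorn(C, eps=0.1, n_iter=500):
    """Entropic OT coupling for a cost matrix C with uniform marginals."""
    a = np.ones(C.shape[0]) / C.shape[0]
    b = np.ones(C.shape[1]) / C.shape[1]
    K = np.exp(-C / eps)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy data: each target is a time-shifted copy of one source, in swapped order.
t = np.linspace(0, 1, 50)
sources = [np.sin(2 * np.pi * 1 * t), np.sin(2 * np.pi * 3 * t)]
targets = [np.sin(2 * np.pi * 3 * (t - 0.1)), np.sin(2 * np.pi * 1 * (t - 0.1))]

C = np.array([[dtw_cost(s, g) for g in targets] for s in sources])
P = sinkhorn(C / C.max())  # coupling concentrates mass on the matching pairs
```

Because DTW discounts the 0.1 time shift, the transport plan `P` pairs each source with the target of the same frequency despite the misalignment.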
Related papers
- Time and Frequency Synergy for Source-Free Time-Series Domain Adaptations [13.206527803092817]
This paper proposes Time Frequency Domain Adaptation (TFDA) to cope with source-free time-series domain adaptation problems.
TFDA is developed with a dual branch network structure fully utilizing both time and frequency features in delivering final predictions.
arXiv Detail & Related papers (2024-10-23T02:29:50Z)
- Multi-Source and Test-Time Domain Adaptation on Multivariate Signals using Spatio-Temporal Monge Alignment [59.75420353684495]
Machine learning applications on signals such as computer vision or biomedical data often face challenges due to the variability that exists across hardware devices or session recordings.
In this work, we propose Spatio-Temporal Monge Alignment (STMA) to mitigate these variabilities.
We show that STMA leads to significant and consistent performance gains between datasets acquired with very different settings.
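To illustrate the Monge-alignment idea this line of work builds on (a one-dimensional Gaussian special case, not the STMA method itself): between Gaussians, the optimal transport map has a closed form, so each channel of a source recording can be remapped so its mean and standard deviation match the target's.

```python
import numpy as np

def gaussian_monge_map(x_src, x_tgt):
    """Per-channel closed-form Monge map between 1-D Gaussians:
    T(x) = mu_t + (sigma_t / sigma_s) * (x - mu_s)."""
    mu_s = x_src.mean(axis=-1, keepdims=True)
    sd_s = x_src.std(axis=-1, keepdims=True)
    mu_t = x_tgt.mean(axis=-1, keepdims=True)
    sd_t = x_tgt.std(axis=-1, keepdims=True)
    return mu_t + (sd_t / sd_s) * (x_src - mu_s)

rng = np.random.default_rng(1)
src = 2.0 * rng.standard_normal((3, 500)) + 5.0   # 3 channels, shifted & scaled
tgt = 0.5 * rng.standard_normal((3, 500)) - 1.0
aligned = gaussian_monge_map(src, tgt)  # channel statistics now match the target
```

After the mapping, each channel of `aligned` has exactly the target's empirical mean and standard deviation, which is the kind of per-device statistic mismatch such alignment removes.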
arXiv Detail & Related papers (2024-07-19T13:33:38Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
With increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- Evidentially Calibrated Source-Free Time-Series Domain Adaptation with Temporal Imputation [38.88779207555418]
Source-free domain adaptation (SFDA) aims to adapt a model pre-trained on a labeled source domain to an unlabeled target domain without access to source data.
This paper proposes MAsk And imPUte (MAPU), a novel and effective approach for time series SFDA.
We also introduce E-MAPU, which incorporates evidential uncertainty estimation to address the overconfidence issue inherent in softmax predictions.
arXiv Detail & Related papers (2024-06-04T05:36:29Z)
- SMORE: Similarity-based Hyperdimensional Domain Adaptation for Multi-Sensor Time Series Classification [17.052624039805856]
We propose SMORE, a novel resource-efficient domain adaptation (DA) algorithm for multi-sensor time series classification.
SMORE achieves on average 1.98% higher accuracy than state-of-the-art (SOTA) DNN-based DA algorithms with 18.81x faster training and 4.63x faster inference.
arXiv Detail & Related papers (2024-02-20T18:48:49Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting [59.11817101030137]
This research advocates for a unified model paradigm that transcends domain boundaries.
Learning an effective cross-domain model presents several challenges.
We propose UniTime for effective cross-domain time series learning.
arXiv Detail & Related papers (2023-10-15T06:30:22Z)
- Contrastive Domain Adaptation for Time-Series via Temporal Mixup [14.723714504015483]
We propose a novel lightweight contrastive domain adaptation framework called CoTMix for time-series data.
Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains.
Our approach can significantly outperform all state-of-the-art UDA methods.
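A minimal sketch of the temporal-mixup idea (a hypothetical simplification for illustration, not CoTMix's exact strategy): each intermediate view is a convex combination of a dominant-domain series and a temporally smoothed series from the other domain.

```python
import numpy as np

def temporal_mixup(x_dom, x_other, lam=0.9, win=5):
    """Mix a dominant-domain series with a moving-average view of the
    other domain's series; lam controls how source- or target-like
    the resulting intermediate view is."""
    kernel = np.ones(win) / win
    x_other_smooth = np.convolve(x_other, kernel, mode="same")
    return lam * x_dom + (1.0 - lam) * x_other_smooth

rng = np.random.default_rng(2)
src = rng.standard_normal(100)
tgt = rng.standard_normal(100)
src_view = temporal_mixup(src, tgt)  # source-dominant intermediate view
tgt_view = temporal_mixup(tgt, src)  # target-dominant intermediate view
```

The two views sit between the domains: each stays strongly correlated with its dominant series while carrying a trace of the other domain, which is what a contrastive loss can then exploit.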
arXiv Detail & Related papers (2022-12-03T06:53:38Z)
- HyperTime: Implicit Neural Representation for Time Series [131.57172578210256]
Implicit neural representations (INRs) have recently emerged as a powerful tool that provides an accurate and resolution-independent encoding of data.
In this paper, we analyze the representation of time series using INRs, comparing different activation functions in terms of reconstruction accuracy and training convergence speed.
We propose a hypernetwork architecture that leverages INRs to learn a compressed latent representation of an entire time series dataset.
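As a much simpler, training-free stand-in for an INR (an illustration of resolution-independent encoding, not HyperTime's hypernetwork architecture), one can fit a time series with sinusoidal features by least squares and then query the fitted function at any resolution:

```python
import numpy as np

def fit_fourier_model(t, y, n_freq=10):
    """Fit y(t) with sinusoidal features via least squares: a crude
    stand-in for an INR that, like one, maps timestamps to values
    independently of the sampling grid."""
    freqs = np.arange(1, n_freq + 1)

    def features(ts):
        return np.hstack([np.ones((len(ts), 1)),
                          np.sin(2 * np.pi * np.outer(ts, freqs)),
                          np.cos(2 * np.pi * np.outer(ts, freqs))])

    w, *_ = np.linalg.lstsq(features(t), y, rcond=None)
    return lambda ts: features(ts) @ w

t = np.linspace(0, 1, 64, endpoint=False)
y = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)
model = fit_fourier_model(t, y)

t_fine = np.linspace(0, 1, 512, endpoint=False)
y_fine = model(t_fine)  # query at 8x the original resolution
```

Because the model is a function of the timestamp rather than an array of samples, it can be evaluated on a finer grid than the one it was fitted on, which is the resolution independence the abstract refers to.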
arXiv Detail & Related papers (2022-08-11T14:05:51Z)
- Graph Attention Recurrent Neural Networks for Correlated Time Series Forecasting -- Full version [16.22449727526222]
We consider a setting where multiple entities interact with each other over time and the time-varying statuses of the entities are represented as correlated time series.
To enable accurate forecasting on correlated time series, we propose graph attention recurrent neural networks.
Experiments on a large real-world speed time series data set suggest that the proposed method is effective and outperforms the state-of-the-art in most settings.
arXiv Detail & Related papers (2021-03-19T12:15:37Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.