Universal Domain Adaptation Benchmark for Time Series Data Representation
- URL: http://arxiv.org/abs/2505.17899v1
- Date: Fri, 23 May 2025 13:47:35 GMT
- Title: Universal Domain Adaptation Benchmark for Time Series Data Representation
- Authors: Romain Mussard, Fannia Pacheco, Maxime Berar, Gilles Gasso, Paul Honeine
- Abstract summary: This work provides a comprehensive implementation and comparison of state-of-the-art TS backbones in a UniDA framework. We propose a reliable protocol to evaluate their robustness and generalization across different domains. Our results highlight the critical influence of backbone selection in UniDA performance.
- Score: 8.877926274964251
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models have significantly improved the ability to detect novelties in time series (TS) data. This success is attributed to their strong representation capabilities. However, due to the inherent variability in TS data, these models often struggle with generalization and robustness. To address this, a common approach is to perform Unsupervised Domain Adaptation, particularly Universal Domain Adaptation (UniDA), to handle domain shifts and emerging novel classes. While extensively studied in computer vision, UniDA remains underexplored for TS data. This work provides a comprehensive implementation and comparison of state-of-the-art TS backbones in a UniDA framework. We propose a reliable protocol to evaluate their robustness and generalization across different domains. The goal is to provide practitioners with a framework that can be easily extended to incorporate future advancements in UniDA and TS architectures. Our results highlight the critical influence of backbone selection in UniDA performance and enable a robustness analysis across various datasets and architectures.
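To make the evaluation protocol concrete, below is a minimal, hedged sketch of a UniDA evaluation loop for time-series backbones: a toy 1-D CNN stands in for a real TS backbone, target-private classes are rejected by a confidence threshold, and performance is summarized with the usual H-score (harmonic mean of known-class and unknown-class accuracy). All names and the random data are illustrative; the paper's released framework may be organized differently.

```python
# Hedged sketch of a UniDA evaluation loop for time-series backbones.
# The toy 1-D CNN, the confidence-threshold rejection rule, and the random
# data are illustrative stand-ins, not the paper's released framework.
import torch
import torch.nn as nn

def h_score(known_acc: float, unknown_acc: float) -> float:
    """Harmonic mean of accuracy on shared ("known") classes and on
    target-private ("unknown") samples -- the usual UniDA summary metric."""
    if known_acc + unknown_acc == 0:
        return 0.0
    return 2 * known_acc * unknown_acc / (known_acc + unknown_acc)

class CNNBackbone(nn.Module):
    """Toy 1-D CNN encoder standing in for a real TS backbone."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, feat_dim))

    def forward(self, x):
        return self.net(x)

def evaluate(backbone, classifier, x, y, known_classes, threshold=0.5):
    """Predict on target data, reject low-confidence samples as 'unknown' (-1),
    then report the H-score over the known and unknown subsets."""
    with torch.no_grad():
        probs = torch.softmax(classifier(backbone(x)), dim=1)
        conf, pred = probs.max(dim=1)
        pred = torch.where(conf < threshold, torch.full_like(pred, -1), pred)
    known_mask = torch.tensor([int(c) in known_classes for c in y])
    known_acc = (pred[known_mask] == y[known_mask]).float().mean().item() if known_mask.any() else 0.0
    unknown_acc = (pred[~known_mask] == -1).float().mean().item() if (~known_mask).any() else 0.0
    return h_score(known_acc, unknown_acc)

if __name__ == "__main__":
    torch.manual_seed(0)
    known_classes = {0, 1, 2}            # classes shared by source and target
    x_tgt = torch.randn(128, 3, 100)     # unlabeled target batch (B, channels, time)
    y_tgt = torch.randint(0, 5, (128,))  # classes 3 and 4 are target-private
    backbone, classifier = CNNBackbone(in_channels=3), nn.Linear(64, len(known_classes))
    # A real protocol would first adapt (backbone, classifier) on source + target data.
    print("H-score:", evaluate(backbone, classifier, x_tgt, y_tgt, known_classes))
```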
Related papers
- A Study on Unsupervised Domain Adaptation for Semantic Segmentation in the Era of Vision-Language Models [1.2499537119440245]
Domain shifts are one of the major challenges in deep learning-based computer vision.
Unsupervised domain adaptation (UDA) methods have emerged that adapt a model to a new target domain using only unlabeled data from that domain.
Recent vision-language models have demonstrated strong generalization capabilities which may facilitate domain adaptation.
We show that replacing the encoder of existing UDA methods by a vision-language pre-trained encoder can result in significant performance improvements.
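As a hedged illustration of this encoder swap (not the paper's actual code), the sketch below keeps a generic segmentation head fixed and replaces only the encoder module; `VisionLanguageEncoder` is a hypothetical placeholder for, e.g., a CLIP-style image encoder loaded from a vision-language checkpoint.

```python
# Hedged illustration of the encoder swap described above (not the paper's code):
# a generic segmentation head stays fixed while the encoder is replaced.
# `VisionLanguageEncoder` is a hypothetical stand-in for a vision-language
# pre-trained image encoder.
import torch
import torch.nn as nn

class VisionLanguageEncoder(nn.Module):
    """Placeholder for a vision-language pre-trained image encoder."""
    def __init__(self, out_channels: int = 256):
        super().__init__()
        self.stem = nn.Conv2d(3, out_channels, kernel_size=16, stride=16)  # toy patchify

    def forward(self, x):
        return self.stem(x)  # (B, C, H/16, W/16) feature map

class SegModel(nn.Module):
    """UDA segmentation model whose encoder is the only swapped component."""
    def __init__(self, encoder: nn.Module, num_classes: int, feat_channels: int = 256):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Conv2d(feat_channels, num_classes, kernel_size=1)

    def forward(self, x):
        logits = self.head(self.encoder(x))
        return nn.functional.interpolate(logits, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)

if __name__ == "__main__":
    model = SegModel(VisionLanguageEncoder(), num_classes=19)
    print(model(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 19, 256, 256])
```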
arXiv Detail & Related papers (2024-11-25T14:12:24Z)
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoders (DisMAE) aims to discover disentangled representations that faithfully reveal the intrinsic features.
DisMAE co-trains an asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z)
- DACAD: Domain Adaptation Contrastive Learning for Anomaly Detection in Multivariate Time Series [61.91288852233078]
In time series anomaly detection, the scarcity of labeled data poses a challenge to the development of accurate models. We propose a novel Domain Adaptation Contrastive learning model for Anomaly Detection in time series (DACAD). Our model employs a supervised contrastive loss for the source domain and a self-supervised contrastive triplet loss for the target domain.
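A hedged sketch of the two loss terms named in this summary, written in PyTorch: a SupCon-style supervised contrastive loss on labeled source embeddings and a triplet loss on target anchor/positive/negative embeddings. The temperature, margin, and function names are illustrative, not DACAD's released implementation.

```python
# Hedged sketch of the two loss terms named above: a SupCon-style supervised
# contrastive loss on labeled source embeddings and a triplet loss on target
# anchor/positive/negative embeddings. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z: torch.Tensor, y: torch.Tensor, tau: float = 0.1):
    """Pull together source embeddings that share a label."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = ((y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask).float()
    # log-softmax over all non-self pairs for each anchor
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -((log_prob * pos_mask).sum(dim=1) / pos_counts).mean()

def target_triplet_loss(anchor, positive, negative, margin: float = 1.0):
    """Self-supervised triplet loss on unlabeled target windows: the positive is
    typically an augmented view of the anchor, the negative another window."""
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)

if __name__ == "__main__":
    torch.manual_seed(0)
    z_src, y_src = torch.randn(16, 64), torch.randint(0, 4, (16,))
    z_a, z_p, z_n = torch.randn(16, 64), torch.randn(16, 64), torch.randn(16, 64)
    total = supervised_contrastive_loss(z_src, y_src) + target_triplet_loss(z_a, z_p, z_n)
    print(float(total))
```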
arXiv Detail & Related papers (2024-04-17T11:20:14Z)
- Test-Time Domain Generalization for Face Anti-Spoofing [60.94384914275116]
Face Anti-Spoofing (FAS) is pivotal in safeguarding facial recognition systems against presentation attacks.
We introduce a novel Test-Time Domain Generalization framework for FAS, which leverages the testing data to boost the model's generalizability.
Our method, consisting of Test-Time Style Projection (TTSP) and Diverse Style Shifts Simulation (DSSS), effectively projects the unseen data to the seen domain space.
arXiv Detail & Related papers (2024-03-28T11:50:23Z)
- Unified Domain Adaptive Semantic Segmentation [96.74199626935294]
Unsupervised Domain Adaptive Semantic Segmentation (UDA-SS) aims to transfer supervision from a labeled source domain to an unlabeled target domain. We propose a Quad-directional Mixup (QuadMix) method, characterized by tackling distinct point attributes and feature inconsistencies. Our method outperforms state-of-the-art works by large margins on four challenging UDA-SS benchmarks.
arXiv Detail & Related papers (2023-11-22T09:18:49Z)
- Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP [25.4952909342458]
Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer vision.
We propose a benchmark for natural language that offers thorough viewpoints of the model's generalizability and robustness.
arXiv Detail & Related papers (2023-10-23T12:15:25Z)
- UniTime: A Language-Empowered Unified Model for Cross-Domain Time Series Forecasting [59.11817101030137]
This research advocates for a unified model paradigm that transcends domain boundaries.
Learning an effective cross-domain model presents several challenges.
We propose UniTime for effective cross-domain time series learning.
arXiv Detail & Related papers (2023-10-15T06:30:22Z)
- Contrastive Learning for Unsupervised Domain Adaptation of Time Series [29.211602179219316]
Unsupervised domain adaptation (UDA) aims to learn, from a labeled source domain, a machine learning model that performs well on a similar yet different, unlabeled target domain.
We develop a novel framework for UDA of time series data, called CLUDA.
We show that our framework achieves state-of-the-art performance for time series UDA.
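For readers unfamiliar with the UDA setting defined at the start of this entry, the sketch below shows one common baseline for it, domain-adversarial feature alignment with a gradient reversal layer. It illustrates the setting only; it is not CLUDA's contrastive method.

```python
# Hedged sketch of the UDA setting described above, using one common baseline
# (domain-adversarial feature alignment with a gradient reversal layer).
# This is an illustration of the setting, not CLUDA's contrastive method.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def uda_step(encoder, classifier, domain_disc, x_src, y_src, x_tgt, lambd=1.0):
    """One training step: supervised loss on source + domain confusion on both domains."""
    ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
    f_src, f_tgt = encoder(x_src), encoder(x_tgt)
    cls_loss = ce(classifier(f_src), y_src)
    feats = torch.cat([f_src, f_tgt])
    dom_labels = torch.cat([torch.zeros(len(f_src)), torch.ones(len(f_tgt))])
    dom_logits = domain_disc(GradReverse.apply(feats, lambd)).squeeze(1)
    return cls_loss + bce(dom_logits, dom_labels)

if __name__ == "__main__":
    torch.manual_seed(0)
    enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 100, 64), nn.ReLU())
    clf, disc = nn.Linear(64, 5), nn.Linear(64, 1)
    x_s, y_s, x_t = torch.randn(8, 3, 100), torch.randint(0, 5, (8,)), torch.randn(8, 3, 100)
    loss = uda_step(enc, clf, disc, x_s, y_s, x_t)
    loss.backward()
    print(float(loss))
```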
arXiv Detail & Related papers (2022-06-13T15:23:31Z)
- Exploring Data Aggregation and Transformations to Generalize across Visual Domains [0.0]
This thesis contributes to research on Domain Generalization (DG), Domain Adaptation (DA) and their variations.
We propose new frameworks for Domain Generalization and Domain Adaptation which make use of feature aggregation strategies and visual transformations.
We show how our proposed solutions outperform competitive state-of-the-art approaches in established DG and DA benchmarks.
arXiv Detail & Related papers (2021-08-20T14:58:14Z)
- VisDA-2021 Competition Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z)
- Multi-Source Deep Domain Adaptation with Weak Supervision for Time-Series Sensor Data [31.43183992755392]
First, we propose a novel Convolutional deep Domain Adaptation model for Time Series data (CoDATS).
Second, we propose a novel Domain Adaptation with Weak Supervision (DA-WS) method by utilizing weak supervision in the form of target-domain label distributions.
Third, we perform comprehensive experiments on diverse real-world datasets to evaluate the effectiveness of our domain adaptation and weak supervision methods.
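A hedged sketch of the weak-supervision idea described above: penalize the gap between the batch-averaged predicted class distribution on unlabeled target data and a known target label distribution, here via a KL divergence. This is an illustration, not CoDATS's exact DA-WS loss.

```python
# Hedged sketch of using target-domain label proportions as weak supervision:
# a KL divergence between the known target label distribution and the
# batch-averaged predicted class distribution. Illustration only, not the
# exact DA-WS loss from CoDATS.
import torch
import torch.nn.functional as F

def label_distribution_loss(target_logits: torch.Tensor,
                            target_label_dist: torch.Tensor,
                            eps: float = 1e-8) -> torch.Tensor:
    """KL(known target label distribution || batch-averaged predicted distribution)."""
    pred_dist = F.softmax(target_logits, dim=1).mean(dim=0)  # average over the batch
    return torch.sum(target_label_dist *
                     torch.log((target_label_dist + eps) / (pred_dist + eps)))

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(32, 4)                      # predictions on an unlabeled target batch
    known_dist = torch.tensor([0.4, 0.3, 0.2, 0.1])  # weak supervision: class proportions
    print(float(label_distribution_loss(logits, known_dist)))
```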
arXiv Detail & Related papers (2020-05-22T04:16:58Z)