MSTAR: Multi-Scale Backbone Architecture Search for Timeseries Classification
- URL: http://arxiv.org/abs/2402.13822v1
- Date: Wed, 21 Feb 2024 13:59:55 GMT
- Title: MSTAR: Multi-Scale Backbone Architecture Search for Timeseries Classification
- Authors: Tue M. Cao, Nhat H. Tran, Hieu H. Pham, Hung T. Nguyen, and Le P. Nguyen
- Abstract summary: We propose a novel multi-scale search space and a framework for Neural Architecture Search (NAS).
We show that our model can serve as a backbone for a powerful Transformer module with both untrained and pre-trained weights.
Our search space reaches state-of-the-art performance on four datasets from four different domains.
- Score: 0.41185655356953593
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most previous approaches to Time Series Classification (TSC) highlight the significance of receptive fields and frequencies while overlooking time resolution. As a result, they unavoidably suffer from scalability issues, since they integrate an extensive range of receptive fields into the classification models. Other methods, while adapting better to large datasets, require manual design and still fail to reach the optimal architecture due to the uniqueness of each dataset. We overcome these challenges by proposing a novel multi-scale search space and a framework for Neural Architecture Search (NAS) that addresses both frequency and time resolution, discovering the suitable scale for a specific dataset. We further show that our model can serve as a backbone for a powerful Transformer module with both untrained and pre-trained weights. Our search space reaches state-of-the-art performance on four datasets from four different domains while introducing more than ten highly fine-tuned models for each dataset.
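To make the multi-scale idea concrete, here is a minimal PyTorch sketch (not the authors' released code): a searchable cell holds parallel 1-D convolution branches at different kernel sizes, i.e. different time resolutions, and a DARTS-style soft mixture stands in for the search. All widths and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleCell(nn.Module):
    """One searchable cell: parallel 1-D convolutions at several time
    scales (kernel sizes); a NAS algorithm would choose among them.
    The kernel sizes below are illustrative assumptions."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 9, 27, 81)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm1d(out_ch),
                nn.ReLU(),
            )
            for k in kernel_sizes
        ])
        # Softmax-weighted mixture over branches (DARTS-style relaxation).
        self.alpha = nn.Parameter(torch.zeros(len(kernel_sizes)))

    def forward(self, x):  # x: (batch, channels, time)
        w = torch.softmax(self.alpha, dim=0)
        outs = [b(x) for b in self.branches]
        # Crop to a common length in case a searched kernel size is even.
        t = min(o.shape[-1] for o in outs)
        return sum(wi * o[..., :t] for wi, o in zip(w, outs))

# Example: a batch of 3-channel series of length 256.
x = torch.randn(8, 3, 256)
cell = MultiScaleCell(3, 32)
print(cell(x).shape)  # torch.Size([8, 32, 256])
```

A discrete NAS method (e.g. evolutionary or RL-based) would replace the soft mixture with a hard per-cell choice of scale.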
Related papers
- User-friendly Foundation Model Adapters for Multivariate Time Series Classification [16.94369040048502]
Foundation models, while highly effective, are often resource-intensive, requiring substantial inference time and memory.
This paper addresses the challenge of making these models more accessible with limited computational resources by exploring dimensionality reduction techniques.
Our experiments show up to a 10x speedup compared to the baseline model, without performance degradation, and enable up to 4.5x more datasets to fit on a single GPU.
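One way to read the dimensionality-reduction idea (the concrete method and sizes below are assumptions, not taken from the paper): project frozen foundation-model embeddings down to a small dimension before fitting a lightweight head, so inference only runs the cheap projection and head.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical frozen-backbone embeddings: 1000 series, 768-dim features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 768))
y_train = rng.integers(0, 4, size=1000)

# Reduce 768 -> 64 dims, then fit a cheap linear head; at inference only
# the small projection and head run, cutting time and memory.
clf = make_pipeline(PCA(n_components=64), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(clf.score(X_train, y_train))
```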
arXiv Detail & Related papers (2024-09-18T18:50:20Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
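A minimal sketch of the parameter-efficient federated pattern (federated averaging over a small trainable head while the backbone stays frozen; the module names and the choice of shared parameters are assumptions for illustration, not PeFAD's design):

```python
import copy
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.GRU(1, 32, batch_first=True)  # frozen, e.g. pre-trained
        self.adapter = nn.Linear(32, 1)                  # small trainable head
        for p in self.backbone.parameters():
            p.requires_grad = False

    def forward(self, x):        # x: (batch, time, 1)
        h, _ = self.backbone(x)
        return self.adapter(h)   # per-step anomaly/reconstruction score

def fedavg_adapters(clients):
    """Average only the lightweight adapter weights across clients, so raw
    series never leave each site and communication stays small."""
    avg = copy.deepcopy(clients[0].adapter.state_dict())
    for k in avg:
        avg[k] = torch.stack([c.adapter.state_dict()[k] for c in clients]).mean(0)
    for c in clients:
        c.adapter.load_state_dict(avg)

clients = [ClientModel() for _ in range(3)]
# ... each client trains its adapter locally on private data ...
fedavg_adapters(clients)
```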
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- Time Series Representation Models [2.724184832774005]
Time series analysis remains a major challenge due to its sparse characteristics, high dimensionality, and inconsistent data quality.
Recent advancements in transformer-based techniques have enhanced capabilities in forecasting and imputation.
We propose a new architectural concept for time series analysis based on introspection.
arXiv Detail & Related papers (2024-05-28T13:25:31Z)
- UniCL: A Universal Contrastive Learning Framework for Large Time Series Models [18.005358506435847]
Time-series analysis plays a pivotal role across a range of critical applications, from finance to healthcare.
Traditional supervised learning methods require annotating extensive labels for the time-series data in each task.
This paper introduces UniCL, a universal and scalable contrastive learning framework designed for pretraining time-series foundation models.
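For context, the generic contrastive-pretraining pattern behind such frameworks looks roughly like the sketch below (an InfoNCE-style loss on two augmented views; the jitter augmentation and toy encoder are assumptions, not UniCL's actual design):

```python
import torch
import torch.nn.functional as F

def jitter(x, sigma=0.1):
    """Simple augmentation: add Gaussian noise to a batch of series."""
    return x + sigma * torch.randn_like(x)

def info_nce(z1, z2, tau=0.2):
    """Two views of the same series are positives; all other series
    in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (batch, batch) similarities
    labels = torch.arange(z1.size(0))     # positives on the diagonal
    return F.cross_entropy(logits, labels)

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(128, 64))
x = torch.randn(32, 1, 128)               # unlabeled series
loss = info_nce(encoder(jitter(x)), encoder(jitter(x)))
loss.backward()
```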
arXiv Detail & Related papers (2024-05-17T07:47:11Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
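A toy version of the masked-encoder training objective such models use (patchify the series, mask the future patches, reconstruct them; the patch size and tiny transformer below are illustrative assumptions, not Moirai's configuration):

```python
import torch
import torch.nn as nn

# Toy masked forecasting: split a series into patches, replace the future
# patches with a mask token, and train an encoder to reconstruct them.
patch, n_patches, d = 16, 8, 64
x = torch.randn(4, n_patches * patch)            # (batch, time)
patches = x.view(4, n_patches, patch)

embed = nn.Linear(patch, d)
mask_token = nn.Parameter(torch.zeros(1, 1, d))
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2)
head = nn.Linear(d, patch)

tokens = embed(patches)
# Hide the last 2 patches (the "future") behind the learned mask token.
tokens = torch.cat([tokens[:, :-2], mask_token.expand(4, 2, d)], dim=1)
pred = head(encoder(tokens))
loss = nn.functional.mse_loss(pred[:, -2:], patches[:, -2:])
loss.backward()
```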
arXiv Detail & Related papers (2024-02-04T20:00:45Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show that our pre-trained method is a strong zero-shot baseline and benefits from further scaling in both model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- MADS: Modulated Auto-Decoding SIREN for time series imputation [9.673093148930874]
We propose MADS, a novel auto-decoding framework for time series imputation, built upon implicit neural representations.
We evaluate our model on two real-world datasets, and show that it outperforms state-of-the-art methods for time series imputation.
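For context, a SIREN is an MLP with sine activations that represents a signal as a continuous function of time, which is what makes imputation a matter of querying unobserved timestamps. A minimal sketch (the layer widths and omega_0 are conventional SIREN choices, not MADS's exact setup):

```python
import torch
import torch.nn as nn

class SIREN(nn.Module):
    """Implicit neural representation: value = f(time), sine activations."""
    def __init__(self, hidden=64, omega0=30.0):
        super().__init__()
        self.l1 = nn.Linear(1, hidden)
        self.l2 = nn.Linear(hidden, hidden)
        self.l3 = nn.Linear(hidden, 1)
        self.omega0 = omega0

    def forward(self, t):
        h = torch.sin(self.omega0 * self.l1(t))
        h = torch.sin(self.l2(h))
        return self.l3(h)

# Fit the observed samples of one series, then query missing timestamps.
t_obs = torch.rand(200, 1)                       # observed times in [0, 1]
y_obs = torch.sin(12 * t_obs) + 0.05 * torch.randn_like(t_obs)
net = SIREN()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(t_obs), y_obs)
    loss.backward()
    opt.step()

t_missing = torch.linspace(0, 1, 50).unsqueeze(1)
imputed = net(t_missing)                         # values at unobserved times
```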
arXiv Detail & Related papers (2023-07-03T09:08:47Z)
- Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening [67.6394526631557]
A practical segmentation model should incrementally learn from each incoming dataset and progressively update with improved functionality over time.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
arXiv Detail & Related papers (2022-11-09T13:07:36Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are available only on the source dataset, but not on the target dataset, during the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
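A minimal sketch of a temporal-permutation pretext task of this kind (generic, not the paper's exact formulation): shuffle each sequence's segments and train a classifier to predict which permutation was applied, which requires no action labels.

```python
import itertools
import random
import torch
import torch.nn as nn

# Pretext task: split each sequence into 3 temporal segments, shuffle them,
# and predict which of the 3! = 6 permutations was used.
perms = list(itertools.permutations(range(3)))       # 6 classes

def permute_segments(x):                             # x: (channels, time)
    segs = torch.chunk(x, 3, dim=-1)
    label = random.randrange(len(perms))
    return torch.cat([segs[i] for i in perms[label]], dim=-1), label

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 90, 128), nn.ReLU(),
                      nn.Linear(128, len(perms)))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 3, 90)                           # e.g. 3-channel pose series
pairs = [permute_segments(xi) for xi in x]
batch = torch.stack([p[0] for p in pairs])
labels = torch.tensor([p[1] for p in pairs])
loss = nn.functional.cross_entropy(model(batch), labels)
loss.backward()
opt.step()
```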
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- PIETS: Parallelised Irregularity Encoders for Forecasting with Heterogeneous Time-Series [5.911865723926626]
Heterogeneity and irregularity of multi-source data sets present a significant challenge to time-series analysis.
In this work, we design a novel architecture, PIETS, to model heterogeneous time-series.
We show that PIETS is able to effectively model heterogeneous temporal data and outperforms other state-of-the-art approaches in the prediction task.
arXiv Detail & Related papers (2021-09-30T20:01:19Z)
- AdaXpert: Adapting Neural Architecture for Growing Data [63.30393509048505]
In real-world applications, data often arrive in a growing manner, where the data volume and the number of classes may increase dynamically.
As the data volume or the number of classes grows, one has to promptly adjust the neural model's capacity to maintain promising performance.
Existing methods either ignore the growing nature of data or seek to independently search an optimal architecture for a given dataset.
arXiv Detail & Related papers (2021-07-01T07:22:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.